id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
54513308 | pes2o/s2orc | v3-fos-license | The Economic Role of Petrochemical Industry in Iran
Iran's economy is characterized by over-dependence on the oil sector. Iran has been gradually growing into a centre for the production of petrochemicals in the world. The petrochemical industry is one of the significant components of the oil industry and one of the principal industries in Iran, with an influential role in the Iranian economy. It is widely acknowledged that exports, particularly of manufactured components, play an important role as a potential source of economic growth. Hence, the aim of this research is to analyse the impact of petrochemical product export revenue on economic growth. The main objective is therefore to test the export-led growth hypothesis (ELG hypothesis) for Iran's economy in the petrochemical industry, using time series data for the period 1990-2010. The ordinary least squares (OLS) method is applied to investigate the relationship between gross domestic product, exports of petrochemical products, the real exchange rate and inflation. The results of the study show a positive relationship between the export of petrochemical products and economic growth, which validates the export-led growth hypothesis in the petrochemical industry, while a negative impact of inflation and the real exchange rate is observed.
Introduction
The relationship between exports and economic growth has been the subject of numerous debates in the economic development and related literatures. An export promotion policy can help to narrow the foreign exchange gap and consequently promote the importing of capital goods and technical knowledge, encouraging domestic production, which leads to a reduction in unemployment, increased profitability and improved capacity utilization, and which overall contributes to economic growth. In addition, it raises competition between domestic firms for better production technology, higher output and a more effective allocation of resources. This, in turn, enhances the sales of goods in domestic and foreign markets and increases the income, economic growth and productivity of a country, a sequence of events known as the Export-Led Growth Hypothesis (ELG) (Bhagwati, 1978 and Krueger, 1978).
The export-led growth (ELG) hypothesis has generally been applied to examine the impact of exports on economic growth. Numerous studies advocate this hypothesis, finding a positive correlation between exports and economic growth (e.g., Tyler, 1981; Feder, 1982; Krueger, 1986; Grossman and Helpman, 1991; Giles and Williams, 2000). Export performance has a beneficial effect on economic growth because the expansion of exports raises the demand for a country's output, which consequently increases that output. In addition, export expansion may support specialization in the production of export goods, which can raise productivity levels and may lead to output growth. Increasing exports may also relax a foreign exchange constraint, which assists the importing of inputs and, in turn, output expansion (Giles and Williams, 2000). Raising exports can therefore be suggested as a strategy that enables an economy to grow. Furthermore, to date, the relationship between inflation and economic growth remains a debatable issue (Yogeswari et al., 2012). Several empirical studies, such as Faira and Cameiro (2002) and Singh and Kalirajan (2003), have argued for a negative impact of inflation on economic growth. In contrast, researchers such as Tobin (1965), Lucas (1973) and Gillman (2002) have reported a positive relation between inflation and economic growth, while some research has found a mixed, non-linear relationship between the two (Lee and Wong, 2005; Hwang and Wu, 2011). In addition, the real exchange rate is important in the literature on export-led growth, and earlier studies that link the real exchange rate with GDP are therefore considered. Rodrik (2008) noted that overvaluation hurts growth, an idea supported by several researchers such as Paul (2006) and Gala (2007). Rodrik's study came under critical examination in subsequent work (Gluzmann, 2012), which generally accepted the positive relationship between higher growth and an undervalued exchange rate; this relationship can increase saving and investment, which facilitates growth, and moreover a higher real exchange rate helps to diversify exports and increase their technological intensity (Mario et al.). In this context, the relationships between these macroeconomic variables, GDP, exports, inflation and the real exchange rate, are considered. Moreover, owing to the oil dependency of developing countries and the fluctuations of the global oil market, this study turns its attention to the case of Iran, which, besides having huge oil reserves and exports, has tried to use policies aimed at increasing non-oil exports. In fact, high world oil prices have encouraged the Iranian government to undertake public investment, mainly in the petrochemical industries, which have achieved rapid growth. Under the economic development plan, the Iranian government is trying to raise petrochemical output, and for this reason the industry has attracted substantial foreign investment. Nevertheless, doubts remain over the government's hope of bringing 47 petrochemical operations on stream by the end of the fifth five-year development plan in 2015, adding a total of 43mn tonnes per annum (tpa) of capacity (Central Bank of Iran, 2009). According to officials, once the projects become operational, Iran will account for at least 6.3% of global petrochemical output and 34% of Middle Eastern production. In the past, all Iranian petrochemical companies exported their products through Iran's Petrochemical Commercial Company (IPCC), but today, because of privatization in Iran, most of them export their products directly. Hence, the expansion of non-oil exports, especially petrochemical products, is now a strategy for the development of Iran's economy. This study therefore tests the validity of the ELG hypothesis for the petrochemical industry in Iran over the period 1990-2010.
Methodology
Although it is widely acknowledged that exports, particularly of manufactured components, play an important role as a potential source of economic growth, the debate on the relationship between exports and economic growth is still ongoing. Given the general economic importance of foreign trade for the national economy and the importance of petrochemical product exports for Iran, the aim of this research is to analyse the impact of petrochemical product export revenue on economic growth. The main objective of this study is therefore to investigate the relationship between exports of the petrochemical industry and the economic growth of Iran. To do this, the ordinary least squares (OLS) method is used. In particular, the study investigates the link between exports of petrochemical products and economic growth in order to test the effect of export promotion policies in a branch of the non-oil sector on the country's economic growth, through an empirical investigation of the Export-Led Growth (ELG) hypothesis.
Research Methodology Framework
To address the aim and objective of the study, a quantitative approach with mathematical and statistical methods was used for the period 1990-2010. The hypothesis was evaluated using the Gretl software and the ordinary least squares (OLS) method.
Model Specification
This study uses the ordinary least squares (OLS) method to test the export-led growth hypothesis in the context of Iranian petrochemical export products. Theoretically, the contribution of exports to economic growth can be captured by a simple model:
Y= f (X)
Where, Y refers to GDP and X to exports.
To capture a constant response of GDP to exports, a linear model of the form Y = b0 + b1 X + e could be employed, while for a non-constant response of the same variable the logarithmic model is effective, ln Y = b0 + b1 ln X + e.
The following model is suggested for estimating the effect of exports of petrochemical products on gross domestic product, along with the real exchange rate and inflation, during the study period 1990 to 2010:
ln GDP_t = b0 + b1 ln EXPORT_t + b2 ln RER_t + b3 ln INF_t + e_t
As can be seen, the model is non-linear in the original variables. The theoretical reason for this is that a constant impact of an export stimulus on the economy over time is not necessarily expected, and hence a logarithmic model is more appropriate.
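By way of illustration, such a log-log specification can be estimated with any standard econometrics package. The sketch below uses Python's statsmodels rather than the authors' Gretl, on simulated placeholder data with hypothetical variable names (not the study's actual 1990-2010 series):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 21  # annual observations, 1990-2010

# Hypothetical series standing in for the study's data (illustration only).
df = pd.DataFrame({
    "petro_exp": rng.lognormal(6, 0.4, n),   # petrochemical export revenue
    "rer": rng.lognormal(8, 0.2, n),         # real exchange rate
    "inf": rng.uniform(10, 30, n),           # inflation rate (%)
})
df["gdp"] = np.exp(3 + 0.5 * np.log(df["petro_exp"]) - 0.1 * np.log(df["rer"])
                   - 0.05 * np.log(df["inf"]) + rng.normal(0, 0.05, n))

# Log-log specification: ln(GDP) = b0 + b1 ln(EXPORT) + b2 ln(RER) + b3 ln(INF) + e
y = np.log(df["gdp"])
X = sm.add_constant(np.log(df[["petro_exp", "rer", "inf"]]))
ols = sm.OLS(y, X).fit()
print(ols.summary())   # coefficients, t-values, p-values, R-squared
```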
Results and Discussion
Regression using the logarithmic specification describes the relationship between the independent and dependent variables. Table 1 indicates that there is a positive relationship between gross domestic product and exports of petrochemical products, whereas both the real exchange rate and inflation exhibit a negative relationship with gross domestic product. The p-value shows the probability value for the significance of each variable; all of the independent variables play a significant part in economic growth. In addition, the t-values confirm that all independent variables are significant at the 5% level. R2 shows how much of the variation in the dependent variable is explained by the independent variables; its value of 0.98 in this result satisfies the required range.
In statistics, the Durbin-Watson statistic is used to detect the presence of autocorrelation (a relationship between values separated from each other by a given time lag) in the residuals (prediction errors) from a regression analysis. If e_t is the residual associated with the observation at time t, the test statistic is:
d = [Σ_{t=2..T} (e_t - e_{t-1})^2] / [Σ_{t=1..T} e_t^2]
The Durbin-Watson (DW) statistic is 1.67, which lies in the interval <dU; 2>, meaning that there is statistically no autocorrelation at the 5% critical values for the Durbin-Watson statistic (Table 2). The Breusch-Godfrey test is also used to assess the validity of some of the modelling assumptions inherent in applying regression-like models to observed data series. In particular, it tests for the presence of serial dependence that has not been included in the model structure and which, if present, would mean that incorrect conclusions are drawn from other tests, or that sub-optimal estimates of model parameters are obtained if it is not taken into account. The regression models to which the test can be applied include cases where lagged values of the dependent variable are used as independent variables in the model's representation for later observations; this type of structure is common in econometric models. Therefore, to confirm whether autocorrelation exists, the Breusch-Godfrey test was also calculated using the Gretl software. According to the results, there is no first-order autocorrelation at the 5% level of significance (Table 3). The possible existence of heteroscedasticity is a major concern in the application of regression analysis, including the analysis of variance, because its presence can invalidate statistical tests of significance that assume the modelling errors are uncorrelated and normally distributed and that their variances do not vary with the effects being modelled. When using some statistical techniques, such as ordinary least squares (OLS), a number of assumptions are typically made; one of these is that the error term has a constant variance. This might not hold even if the error terms are assumed to be drawn from identical distributions.
One of the assumptions of the classical linear regression model is that there is no heteroscedasticity. Heteroscedasticity does not cause ordinary least squares coefficient estimates to be biased, although it can cause ordinary least squares estimates of the variance (and, thus, the standard errors) of the coefficients to be biased, possibly above or below the true population variance. Thus, regression analysis using heteroscedastic data will still provide an unbiased estimate of the relationship between the predictor variables and the outcome, but the standard errors, and therefore the inferences obtained from the analysis, are suspect. Biased standard errors lead to biased inference, so the results of hypothesis tests may be wrong. For example, if OLS is performed on a heteroscedastic data set, yielding biased standard error estimates, a researcher might fail to reject a null hypothesis at a given significance level when that null hypothesis was in fact uncharacteristic of the actual population. The White test is a statistical test for detecting heteroscedasticity and was applied to the data under study (Table 4); the calculation indicates that there is no heteroscedasticity at the 5% level of significance. An alternative to the White test is the Breusch-Pagan test, which is used to test for heteroscedasticity in a linear regression model. It tests whether the estimated variance of the residuals from a regression depends on the values of the independent variables; in effect, the Breusch-Pagan test checks for conditional heteroscedasticity and is a chi-squared test. If the Breusch-Pagan test shows that there is conditional heteroscedasticity, the original regression can be corrected by using the Hansen method, using robust standard errors, or re-thinking the regression equation by changing and/or transforming the independent variables. Therefore, to confirm whether heteroscedasticity exists, the Breusch-Pagan test was also applied, with the results presented in Table 5; the calculation again indicates no heteroscedasticity at the 5% level of significance. The last test in our data analysis is the normality test (Table 6). Normality tests are used to determine whether a data set is well modelled by a normal distribution and to compute how likely it is that the random variable underlying the data set is normally distributed. One application of normality tests is to the residuals from a linear regression model; if they are not normally distributed, the residuals should not be used in tests derived from the normal distribution, such as t tests, F tests and chi-squared tests. If the residuals are not normally distributed, the dependent variable or at least one explanatory variable may have the wrong functional form, or important variables may be missing.
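For reference, a consolidated sketch of these residual diagnostics is shown below. It uses Python's statsmodels on simulated data rather than the authors' Gretl output, so the numbers will not match Tables 2-6, and the Jarque-Bera statistic merely stands in for the chi-square normality test reported by Gretl:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson, jarque_bera
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_white, het_breuschpagan

rng = np.random.default_rng(1)
n = 21
X = sm.add_constant(rng.normal(size=(n, 3)))         # 3 illustrative regressors
y = X @ np.array([1.0, 0.5, -0.1, -0.05]) + rng.normal(scale=0.1, size=n)
res = sm.OLS(y, X).fit()
e = res.resid

dw = durbin_watson(e)                                 # values near 2 suggest no first-order autocorrelation
bg_stat, bg_p, _, _ = acorr_breusch_godfrey(res, nlags=1)   # serial correlation
w_stat, w_p, _, _ = het_white(e, res.model.exog)            # heteroscedasticity (White)
bp_stat, bp_p, _, _ = het_breuschpagan(e, res.model.exog)   # heteroscedasticity (Breusch-Pagan)
jb_stat, jb_p, _, _ = jarque_bera(e)                        # normality of residuals

print(f"Durbin-Watson: {dw:.2f}")
print(f"Breusch-Godfrey p = {bg_p:.3f}, White p = {w_p:.3f}, "
      f"Breusch-Pagan p = {bp_p:.3f}, Jarque-Bera p = {jb_p:.3f}")
```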
Correcting one or more of these systematic errors (such as a wrong functional form or omitted variables) may produce residuals that are normally distributed. The calculation indicates that the random error has a normal distribution at the 5% level of significance (Figure 1). If α = 0.05 is taken as the level of significance for the final decision, there is no autocorrelation in the model; heteroscedasticity, as measured by the White and Breusch-Pagan tests, is likewise absent; and the normality of the random error is within a satisfactory range. The summary is presented in Table 7. The positive and significant coefficient on petrochemical exports implies that an increase in this variable encourages better economic performance, while a fall reduces economic growth. The data are fully in agreement with the export-led growth hypothesis. Economic growth can be enhanced by exports of petrochemical products, as a non-oil export, through access to global markets, which in turn enhances economies of scale. Iran can enlarge its market for petrochemical products by exporting to international markets and, with an outward-oriented strategy, can account for an important share of the global market.
Therefore, policies concentrating on export promotion, especially for petrochemical products whose raw materials are easily accessible in the domestic market, should be used effectively to build export capacity and thereby increase economic growth. Trade barriers in this context should be overcome through proper policies, and the adoption of new and advanced technology should be considered. In addition, in order to achieve high and stable economic growth and to protect the economy from the negative effects of oil price fluctuations, the Iranian government should continue its quest for more efficient and effective non-oil export promotion policies, as well as its diversification strategies aimed at weaning the economy from its dependence on the oil sector. Oil will, however, undoubtedly continue to be the leading sector of the Iranian economy, pulling the other sectors in its wake. In this context, trade barriers should be overcome through proper policies, and an open trade policy will be an effective strategy for Iran in the long run. It is therefore proposed that the Iranian government continue the policy of trade liberalization, increasing its global competitiveness by decreasing barriers and restrictions on exports and imports.
The stabilization of the exchange rate helps to prevent overvaluation or devaluation, which can blunt the international competitiveness of potential export industries.
Conclusion
The results of our study show that there is a positive relationship between the export of petrochemical products and the economic growth of Iran, validating the export-led growth hypothesis. This means that an increase in the export of petrochemical products can drive economic growth, while a decrease will slow it. Export growth can be raised through exports of petrochemical products by accessing global markets, which in turn increases economic growth. Iran should therefore apply policies to make non-oil exports, especially those of the petrochemical industry, more competitive in order to gain access to international markets. For this reason, joining the WTO and raising the share and diversity of non-oil exports in total exports should be considered high priorities. Raising the quality of petrochemical export products, stabilizing the exchange rate, deregulating the banking sector and reforming the public sector would also lead to non-oil export expansion. Moreover, in order to utilize its comparative advantages, Iran should use oil as much as possible in the domestic industrial sector through the extensive enlargement of energy-based industries such as petrochemicals. In addition, since the prices of both crude oil and natural gas fluctuate strongly, the Iranian government needs to look beyond these unrefined products. More investment in other petrochemical products will be necessary in order to exploit Iran's comparative advantage in oil- and gas-based industries and to insulate the country from the wild fluctuations of these resources in their unrefined state.
However, based on our data analysis, the inflation rate and the real exchange rate exhibit a negative relationship with GDP. It is therefore better to apply a proper exchange rate policy in the country to maintain international competitiveness and a sustainable external balance of payments; hence, exchange rate policies should be revised and exchange rate instability eliminated.
In addition, it is highly recommended to control fluctuations in inflation, in order to overcome its negative impact on economic growth and to achieve macroeconomic stability through monetary and fiscal policy reforms that target inflation. Internal and external balances are necessary for macroeconomic stability, which underpins the trade-growth nexus dynamic.
Figure 1. Test of normality of residuals. Source: Authors' computation from Gretl software.
Table 6. Test for the null hypothesis of a normal distribution: Chi-square(2) = 3.255 with p-value 0.19646. Source: Authors' computation from Gretl software.
Table 7. Evaluation of econometric verification at a 5% level of significance | 2018-12-03T12:15:32.720Z | 2015-09-30T00:00:00.000 | {
"year": 2015,
"sha1": "7d43c0dcbfea6fd00fc222a115d11e8d6857841c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5539/mas.v9n11p101",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7d43c0dcbfea6fd00fc222a115d11e8d6857841c",
"s2fieldsofstudy": [
"Economics",
"Environmental Science",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
14303942 | pes2o/s2orc | v3-fos-license | The Effectiveness of Internet Cognitive Behavioural Therapy (iCBT) for Depression in Primary Care: A Quality Assurance Study
Background Depression is a common, recurrent, and debilitating problem and Internet delivered cognitive behaviour therapy (iCBT) could offer one solution. There are at least 25 controlled trials that demonstrate the efficacy of iCBT. The aim of the current paper was to evaluate the effectiveness of an iCBT Program in primary care that had been demonstrated to be efficacious in two randomized controlled trials (RCTs). Method Quality assurance data from 359 patients prescribed the Sadness Program in Australia from October 2010 to November 2011 were included. Results Intent-to-treat marginal model analyses demonstrated significant reductions in depressive symptoms (PHQ9), distress (K10), and impairment (WHODAS-II) with medium-large effect sizes (Cohen's d = .51–1.13.), even in severe and/or suicidal patients (Cohen's d = .50–1.49.) Secondary analyses on patients who completed all 6 lessons showed levels of clinically significant change as indexed by established criteria for remission, recovery, and reliable change. Conclusions The Sadness Program is effective when prescribed by primary care practitioners and is consistent with a cost-effective stepped-care framework.
Introduction
Depression is a common, recurrent, and debilitating problem [1]. Although evidence-based treatments exist, most people with depression do not obtain such treatment [2]. Of those who do access evidence-based treatments, approximately 25% do not improve [3]. Most adults who seek help for depression are treated in primary care settings where under-recognition remains high [4] and treatment may not be ideal [5]. Integration of mental health specialists into primary care sites has resulted in better treatment outcomes [6], but pragmatic and financial reasons are likely to preclude complete adoption of this model. One cost-effective and pragmatic means of increasing the quality of treatment available in primary care settings is through the use of internet-based cognitive behavioural therapy (iCBT) programs. Internet-based treatment affords many benefits over the traditional face-to-face modality, such as high fidelity, greater accessibility, convenience, and reduced cost to patients [7]. In a systematic review of 25 controlled trials (total n = 5,719) of iCBT for depression vs treatment as usual, placebo or wait list control [8] effect size superiority over the control groups ranged from zero to 1.18. Meta-analyses of RCTs of iCBT for depression have demonstrated moderate effect sizes that provide evidence that iCBT can be comparable to best-practice face-to-face CBT [7,9,10]. The CRUfAD Sadness Program (www.thiswayup.org.au/clinic) has been evaluated in a number of trials [11,12,13]. In a RCT comparing clinician-assisted to technician-assisted implementation of the Sadness Program, Cohen's d effect size superiority over the wait list control group at the end of therapy (PHQ9 scores) was 1.27, and this result was maintained at follow-up. Patient samples in research studies may be unrepresentative of those seen in primary care, but the highly structured and standardized nature of the iCBT Sadness program ensures that it can be transferred to routine practice in primary care without compromising treatment fidelity or efficacy. Whether effectiveness in practice parallels efficacy in RCTs is the core question addressed in the current paper by examining the progress of patients treated with the Sadness program in primary care. We evaluated this by examining the progress of patients treated with the Sadness program in primary care. In most efficacy studies, individuals with severe depressive symptoms and suicidal ideation are excluded from participation and are referred for traditional face-to-face treatment. Therefore systematic evidence that iCBT is appropriate for this population is lacking. We advise prescribing clinicians to exclude patients who are very severe or who are actively suicidal, but it is unknown to what extent clinicians adhere to this guidance. The current quality assurance study sought to quantify the proportion of individuals enrolled in the Sadness Program who were either severely depressed and/or expressing suicidal ideation and secondly, to determine whether treatment was effective for this group of patients.
Ethics Statement
The current paper was written as part of the Quality Assurance activities of St. Vincent's Hospital. At St. Vincent's Hospital the Human Research Ethics Committee does not consider clinical audits. That responsibility is vested with the Quality Assurance and Patient Safety Unit with whom a copy of this paper was lodged prior to submission. The Quality Assurance and Patient Safety Unit does not formally approve research projects, but assesses submitted reports for adherence to the Clinical Governance framework guidelines. The current quality assurance study adhered to these guidelines by examining the type of patient being prescribed the Sadness Program in primary care and reports on the effectiveness of the program. Data was necessarily confined to measures used as a routine to inform practitioners about the progress of their patients. All patients provided electronic informed consent that their pooled data could be used for quality assurance purposes.
The Sadness Program
The Sadness Program has been described in detail previously [11,12]. Patients were provided with a prescription from a GP or clinician registered with CRUfAD in order to enrol in the Sadness Program. As routine practice, prescribing clinicians were advised that patients are unlikely to benefit if they have very severe depression, persistent suicidal thoughts, drug or alcohol dependence, schizophrenia, bipolar disorder, or are taking atypical antipsychotics or benzodiazepines. Clinical responsibility was maintained by the prescribing clinician who received automatic updates via email regarding each patient's progress. The prescribing clinician also received an email alert if a patient's scores on the Kessler-10 (K10) Psychological Distress Scale indicated elevated distress or the patient endorsed suicidality on the Patient Health Questionnaire (PHQ-9). The Sadness Program was developed so that a patient cannot advance to the subsequent lesson without first completing the preceding lesson, downloading the associated homework components, and then waiting 5 days (to ensure sufficient time to review the materials and to complete the homework tasks). All patients have 10 weeks to complete the program and are encouraged to progress through each lesson at a pace of 1 lesson per every 1-2 weeks. Patient progress is tracked automatically through the CRUfAD Clinic system. The program consists of six online lessons representing best practice CBT as well as regular homework assignments and access to supplementary resources. Each lesson was designed using a cartoon narrative and included: psycho-education, behavioural activation, cognitive restructuring, graded exposure, problem solving, assertiveness skills, and relapse prevention.
Participants
Data were collected from 359 patients with a prescription for the Sadness Program from October 2010 to November 2011. The mean age of patients was 41.59 (SD = 14.15) and 59% were female. Fifty-four percent of patients were from an Australian rural or remote community (defined based on the geographical location of the prescribing clinician's practice). For secondary analyses, patients were classified into three groups based on the number of lessons completed. Patients who completed all six lessons are referred to as Completers. Non-Completers were separated into two groups based on evidence that the treatment dose curve peaks at lesson 4 [14], therefore patients who completed at least 4 lessons are referred to as Non-Completers and patients who completed between 1-3 lessons are referred to as Drop-Outs.
Measures
Patient Health Questionnaire. (PHQ-9; [15]). The PHQ-9 is a self-report questionnaire corresponding to the DSM-IV criteria for major depressive disorder. Each item is rated for frequency on a 4-point (0 = not at all, 3 = nearly every day) scale. Total scores range from 0 to 27, with higher scores reflecting higher levels of psychopathology. Depression severity categories correspond to the following scores: 0-9 = normal, 10-14 = mild, 15-19 = moderate, 20-23 = severe, 24-27 = very severe. A PHQ-9 score of ≥ 10 is used as a clinical cut-off for a probable DSM-IV diagnosis of MDD [16]. The PHQ-9 demonstrates good psychometric properties and has been used extensively to measure treatment outcomes during internet CBT interventions targeting depression and anxiety [17,18].
Kessler-10 (K10) Psychological Distress Scale. [19]. The K10 consists of 10 items rated on a five-point scale and is designed to measure non-specific psychological distress. The K10 is completed prior to each lesson as a means of tracking patient distress. If a patient endorses a high K10 score (> 35) or shows an increase of more than 0.5 standard deviations, an automatic alert is emailed to the prescribing clinician. The K10 possesses strong psychometric properties [14,19,20].
World Health Organization Disability Assessment Schedule-II. (WHODAS-II). The WHODAS-II contains 12 items designed to measure disability and activity limitation in the past 30 days across a variety of domains: 1) understanding and communicating, 2) self-care, 3) mobility, 4) interpersonal relationships, 5) work and household roles, and 6) community roles. Each of these domains loads significantly onto one underlying latent factor of global disability [21]. Scores range from 0 to 60, with higher scores indicating greater disability. The WHODAS-II demonstrates strong psychometric properties [22].
Statistical Analyses
Intent-to-treat (ITT) marginal model analyses were used to measure the change in outcome measures across time in the full sample (including drop-outs). This method accounts for missing data due to participant drop-out without assuming that the last measurement was stable (the last observation carried forward assumption, LOCF; [23]) and is appropriate for pre-post only designs [24]. Effects were modelled using the restricted maximum likelihood (REML) estimation method with an autoregressive (AR1) covariance structure specified to account for the correlation between the measures at each time point. Simulation studies have demonstrated the superiority of complete case analysis over LOCF methods in pre-post designs [24,25]. For secondary analyses of Completers and Non-Completers, ANOVA and χ² tests were conducted. The groups were compared on a range of variables including: age, sex, prescribing clinician's profession, rurality (yes, no), and pre-course mean scores on the PHQ9, K10, and WHODAS-II. Cohen's d within-group effect sizes were computed based on the pooled standard deviation and corrected for repeated measurements. Clinically significant change was calculated in Completers only and was defined in three ways. Remission was defined as a post-treatment score below the optimal cut-score for a probable diagnosis of depression on the PHQ9 in patients who initially scored above threshold (> 9). Recovery was defined as a reduction of at least 50% of pre-treatment PHQ9 scores. Reliable improvement was defined as a decrease of at least 5 points on the PHQ9 together with a change in depression severity category based on PHQ9 cut-scores (i.e., from moderate to mild) from pre- to post-treatment, based on the recommendations of [26]. All analyses were conducted using SPSS version 20.
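To make these outcome definitions concrete, the sketch below (Python/pandas on simulated PHQ-9 scores, not the study data, and not the authors' SPSS procedure) classifies completers against the remission, recovery, and reliable-improvement criteria and computes a simple pooled-SD within-group effect size; the repeated-measures correction applied in the paper is omitted here:

```python
import numpy as np
import pandas as pd

def phq9_category(score: float) -> str:
    """PHQ-9 severity bands used in the paper."""
    if score <= 9:  return "normal"
    if score <= 14: return "mild"
    if score <= 19: return "moderate"
    if score <= 23: return "severe"
    return "very severe"

rng = np.random.default_rng(2)
n = 144  # completers above threshold, matching the denominator reported below
pre = rng.integers(10, 28, n).astype(float)               # simulated baseline scores
post = np.clip(pre - rng.normal(7, 5, n), 0, 27)          # simulated post-treatment scores

df = pd.DataFrame({"pre": pre, "post": post})
df["remission"] = df["post"] <= 9                          # below probable-diagnosis cut-off
df["recovery"] = df["post"] <= 0.5 * df["pre"]             # at least 50% reduction from baseline
df["reliable"] = ((df["pre"] - df["post"]) >= 5) & (
    df.apply(lambda r: phq9_category(r["post"]) != phq9_category(r["pre"]), axis=1))

# Within-group Cohen's d on the pooled standard deviation (uncorrected).
sd_pooled = np.sqrt((df["pre"].var(ddof=1) + df["post"].var(ddof=1)) / 2)
d = (df["pre"].mean() - df["post"].mean()) / sd_pooled

print(df[["remission", "recovery", "reliable"]].mean())    # proportions meeting each criterion
print(f"Cohen's d = {d:.2f}")
```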
Sample Characteristics
Of the 359 patients initially enrolled, 26.5% endorsed PHQ9 scores within the 0-9 subthreshold range, 26% were classified as mild, 23% as moderate, 17% as severe, and 7.5% as very severe (Table 1). These analyses were repeated in the sample meeting threshold criteria for a probable diagnosis of depression (PHQ9 ≥ 10), with all main effects remaining significant.
Treatment Effectiveness for Severe and Suicidal Patients
Approximately one-quarter of patients prescribed the Sadness course had PHQ9 scores in the severe range (17% severe, 8% very severe). Additionally, 31% (n = 112) endorsed suicidal thoughts several days during the 2-week time period prior to commencing the program, 13% (n = 46) endorsed suicidal ideation for more than half the days, and 9% (n = 31) endorsed suicidal thoughts nearly every day during this time-frame. Of the patients endorsing suicidal ideation, the majority (53%) completed all 6 lessons (68% completed at least 4 lessons), suggesting that the presence of suicidal ideation was not a barrier to treatment adherence. To determine whether suicidal ideation was a barrier to treatment response (irrespective of baseline depression severity), marginal model analyses and Cohen's d within-group effect sizes were calculated in patients who endorsed suicidal thoughts at baseline (n = 189). Results are reported in Table 1. All main effects were significant, all ps ≤ .001, with mean reductions corresponding to medium-large effect sizes. These analyses were repeated in severe patients (PHQ9 ≥ 20) who endorsed suicidal ideation (n = 72). All main effects were significant, all ps ≤ .001, with mean reductions corresponding to medium-large effect sizes.
Clinically Significant Change in Completers
Of those patients who met threshold criteria at baseline for a probable diagnosis of depression (PHQ9 > 9) and who completed all 6 lessons, 63% (91/144) evidenced remission (PHQ9 ≤ 9). Forty-nine percent (71/144) evidenced recovery (at least a 50% reduction in baseline PHQ9 scores). For clinically reliable change, 54% (n = 77/144) evidenced a reduction of at least 5 points on the PHQ9 in combination with a change in depression severity category from higher to lower. Clinically reliable change was unrelated to the profession of the prescribing clinician, χ²(4) = 4.08, p > .05.
Treatment Response in Completers, Non-Completers and Drop-Outs
As the K10 was administered prior to each lesson, data were available to calculate reductions in distress as a function of each lesson completed irrespective of program completion. A significant reduction in distress was defined as at least a 1 SD (7.5) decrease in K10 scores from lesson 1 to the final lesson completed. This was a conservative definition as decreases of 7 points have been shown to correspond to reliable improvement [27]. Table 2 reports the proportion of patients who demonstrated a significant reduction based on this criterion. Consistent with previous findings that the greatest reduction in distress scores occurs within the first four lessons of the Sadness Program [14,28], nearly half (44%) of patients who dropped out after lesson 4 appeared to have benefitted from the program.
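A minimal sketch of this per-lesson criterion, using simulated K10 scores and hypothetical column names (not the study data), might look like this:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 359
df = pd.DataFrame({
    "lessons_done": rng.integers(1, 7, n),        # 1-6 lessons completed
    "k10_lesson1": rng.uniform(20, 45, n),        # K10 before lesson 1
})
df["k10_last"] = np.clip(df["k10_lesson1"] - rng.normal(6, 5, n), 10, 50)

# Completion groups as defined in the paper: 1-3 = Drop-Outs, 4-5 = Non-Completers, 6 = Completers.
df["group"] = pd.cut(df["lessons_done"], bins=[0, 3, 5, 6],
                     labels=["Drop-Outs", "Non-Completers", "Completers"])

# Significant reduction: at least 1 SD (7.5 points) drop from lesson 1 to the last completed lesson.
df["significant_reduction"] = (df["k10_lesson1"] - df["k10_last"]) >= 7.5
print(df.groupby("group", observed=True)["significant_reduction"].mean())
```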
Discussion
The current study provides evidence of the effectiveness of an internet-based CBT (iCBT) program for depression, the Sadness Program, prescribed by primary-care practitioners. Findings indicate that the Sadness Program is effective in reducing depressive symptoms, distress, and impairment with corresponding medium-large effect sizes in primary outcome measures. Approximately 55% of patients who completed all 6 lessons of the program evidenced clinically significant change as indexed by established criteria for remission, recovery, and reliable change.
There have been mixed findings regarding the effectiveness of standard CBT in the treatment of severe depression [29]. In the current study, one-quarter (25%) of patients were in the severe to very severe range for depressive symptoms. The results of the current study suggest that the Sadness Program is effective in reducing depressive symptoms, distress, and impairment in a large proportion of patients presenting with symptoms in the severe range. The current findings also suggest that the presence of suicidal ideation is not a barrier to treatment adherence or response, as patients with active suicidal thoughts demonstrated significant reductions in all primary outcome variables with corresponding medium-large effect sizes. The population attributable ratio (PAR) for depression in suicidal behaviour has been estimated at 80 percent, meaning that 80% of suicidal behaviour would be eradicated if depression did not occur [30]; the current findings of reduced depressive symptoms and suicidal ideation are therefore not trivial. The Sadness Program is designed to automatically alert the prescribing clinician (via email) of suicidal ideation or increased patient distress; however, it remains the responsibility of the prescribing clinician to act on these alerts. It is also important to identify patients who are not responding, given evidence that recurrence is partially influenced by symptom reduction during treatment. Research has demonstrated that individuals who were asymptomatic at follow-up had much lower rates of recurrence than treated individuals with residual symptoms at follow-up (66% vs 87%, respectively) [31].
Conversely, a consequence of the under-recognition of subthreshold or mild depression is that effective early intervention strategies, which might otherwise reduce the risk of patients becoming fully symptomatic, are not being optimally used [32]. It is interesting to note that nearly 27% of patients prescribed the Sadness Program did not meet threshold criteria for a probable diagnosis of depression. It is unknown whether these patients were in fact subthreshold, were in remission from a previous episode, or were simply mis-prescribed the program. It would appear that the program material was relevant to the majority of these patients, so it is unlikely that patients were incorrectly identified. The decision to retain these subthreshold cases in the current analyses was made on the basis that the primary aim of the current study was to evaluate the effectiveness of the Sadness Program as it is prescribed in routine practice. We presume that these patients were either experiencing chronic mild depression or had a history of recurrent depression, were currently in remission, and may have been interested in learning how to better manage their symptoms. Inclusion of routine questions regarding depression history may be a valuable addition to the Sadness Program.
Program adherence (54%) in the current study was lower than reported in two previous RCTs investigating the efficacy of the Sadness Program [11,12], 74% and 75%, respectively, but was consistent with median completion rates (56%) reported in a metaanalysis of computerized CBT treatments [33]. The only variable related to program completion was age: Completers were significantly older than Non-Completers and Drop-Outs. Research is required to assess variables that may interfere with treatment commitment, adherence, and response. It is important to note that a large proportion of patients evidenced a significant reduction in global distress prior to drop-out, suggesting benefit despite program non-completion. Future research could determine whether reducing the length of the program leads to better adherence rates. The current findings need to be considered in light of a number of limitations. As data was collected in the context of routine clinical practice there was no comparison group, hence the effects of natural remission or placebo response over the 10 weeks cannot be separated from response to the specific treatment. It is also important to note that we did not collect information on medication use. It is likely that a proportion of patients presenting to GPs would have also been prescribed a course of antidepressants; however clinically significant treatment response did not vary as a function of the clinician being licensed to prescribe medication. Further, the within-group effect size for change on the PHQ9 was not larger than that obtained in the RCTs of the Sadness Program (where patients who had commenced or altered their antidepressant medication within the last month were excluded). Data were collected to ensure ongoing quality assurance. A benefit of this type of data is that it provides an indication of treatment effectiveness in absence of confounding variables that may indirectly influence treatment response. Had data been collected in the context of a field RCT, the motivation of primary care clinicians to adhere to the Sadness Program guidelines may have increased as a function of external evaluation. Furthermore, the use of no treatment or wait-list control comparisons once efficacious treatments have been identified is increasingly raising ethical questions [34], particularly in the context of primary care. Effectiveness research is aimed at evaluating the feasibility, acceptability, and effectiveness of treatments in environments where most patients will be treated [35]. We are confident that the current results reflect routine practice and therefore can be generalized to other primary care settings.
In conclusion, requisite conditions for improving treatment of depression in primary care have been identified and include organized treatment programs, monitoring of treatment adherence, and guidance from mental health specialists as educators and consultants [36]. The Sadness Program, and other iCBT treatments (including CRUfAD's GAD Program [37]) represents a cost-effective means of ensuring these criteria are met. The current paper provides preliminary evidence of the effectiveness of this form of treatment delivery in primary care and is consistent with a stepped-care framework identified in Australian and NICE guidelines as the method by which treatments for depression, and other disorders, should be delivered [38,39]. | 2016-05-04T20:20:58.661Z | 2013-02-22T00:00:00.000 | {
"year": 2013,
"sha1": "eb7aa7565a7c37888d48cd5eaef10366d64e2fec",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0057447&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb7aa7565a7c37888d48cd5eaef10366d64e2fec",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229352925 | pes2o/s2orc | v3-fos-license | Studies on the Effect of Oil and Surfactant on the Formation of Alginate-Based O/W Lidocaine Nanocarriers Using Nanoemulsion Template
The application of various nanocarrier systems was widely explored in the field of pharmaceuticals to achieve better drug encapsulation and delivery. The aim of this study was to encapsulate lidocaine in alginate-based o/w nanocarriers based on the type of oil (i.e., solid or liquid), using a nanoemulsion template prepared by ultrasound-assisted phase inversion temperature (PIT) approach. The nanoemulsion template was initially prepared by dissolving lidocaine in the oil phase and surfactant and alginate in the aqueous phase, and keeping the PIT at around 85 °C, accompanied by gradual water dilution at 25 °C, to initiate the formation of nanoparticles (o/w) with the aid of low frequency ultrasound. The composition and concentration of the oil phase had a major impact on the particle size and led to an increase in the size of the droplet. The lipids that showed a higher drug solubility also showed higher particle size. On the other hand, increasing the concentration of surfactant decreases the size of the droplet before the concentration of the surfactant exceeds the limit, after which the size of the particle increases due to the aggregates that could be produced from the excess surfactant. The method used produced nanoemulsions that maintained nano-sized droplets < 50 nm, over long-term storage. Our findings are important for the design of nanocarrier systems for the encapsulation of lipophilic molecules.
Introduction
Nanoemulsions (NEs) are metastable nanocarrier systems comprising a mixture of immiscible liquids in which the dispersed droplets are of average size, between 20 and 500 nm [1]. The system appears to be transparent whereby signs of instability in the formulation becomes apparent in the form of turbidity. It is noticeable that the system is highly susceptible to destabilization, primarily due to the Ostwald ripening [2]; a process that results from the difference in solubility between droplets of different sizes [3]. It occurs due to the mass transport of smaller droplets of the dispersed phase through the continuous phase to reach larger droplets, which then grow in size. In order to achieve a long-term stable formulation that can deliver both hydrophilic and hydrophobic drugs, proper operation with appropriate selection of surfactants and method of preparation is essential [4,5]. Nanoemulsion provides a means to dissolve low solubility drugs, while protecting them from hydrolysis and enzymatic degradation [6].
The small size of the nanoemulsion droplets provides many advantages over other formulations. Droplets can withstand Brownian motion and force of gravity, which plays a major role in physical of each other, its effect on their combined solubility is small [23]. Our approach aims to use lidocaine solely as prilocaine is associated with methemoglobinemia [24]. Furthermore, we did not come across a work where lidocaine was used alone as a dispersed phase in nanoemulsions or solid lipid nanoparticles, or prepared using the PIT method. The stability of lidocaine NEs, which were surrounded by surfactant-alginate layers, was also evaluated to understand the effect of ultrasound-assisted phase inversion temperature (PIT) method on nanoemulsion properties, upon the incorporation of alginate.
Materials
Lidocaine was donated by Gulf Pharmaceutical Industries (Julphar, UAE). Oleic acid was supplied by Avonchem, (Macclesfield, UK), beeswax by Acros organics, (Geel, Belgium), and coconut oil by LabChem Inc., (Pennsylvania, PA, USA). Tween 80 was purchased from Sigma-Aldrich, (Missouri, MO, USA). Sodium alginate was obtained from Avonchem (Macclesfield, UK). Chemicals for HPLC analysis included water for HPLC, which was obtained from Fisher Scientific, (Loughborough, UK) and acetonitrile and glacial acetic acid which were purchased from VWR Chemicals BDH, (Lutterworth, UK). Cellulose dialysis membrane used for the entrapment study was bought from Samco Silicone Products (Nuneaton, UK).
Solubility of Lidocaine in Lipids
The solubility of lidocaine in the lipids used in the preparation of nanoemulsion was assessed using the method described earlier [25]. A total of 1 mL of each lipid was transferred to a beaker and placed on a hot plate. A total of 50 mg of lidocaine was then added to the lipid and allowed to dissolve. The addition of lidocaine was continued in increments of 50 mg, until the mixture showed signs of crystallization. The mixture was then diluted with a suitable solvent and the amount of lidocaine used was analyzed using HPLC.
Phase Inversion Temperature Measurements
The phase inversion temperature was determined by measuring the turbidity change of the system with temperature change. Measurements were performed using the Litesizer 500 (Anton Paar, Graz, Austria). The samples were kept in a quartz cuvette and placed in the measuring chamber. They were subjected to a controlled heating/cooling cycle using the Peltier temperature-control device. The samples were heated from 25 to 90 °C at a rate of 5 °C/min, retained at 90 °C for 10 min, and then cooled from 90 to 25 °C at a rate of 5 °C/min. Turbidity versus temperature curves at 660 nm were plotted.
Preparation of Alginate-Based Lidocaine Nanocarriers Using the Nanoemulsion Template
Lidocaine nanoemulsion was prepared according to the method suggested by Sarheed et al., with some modifications [25]. The nanoemulsion was prepared by combining a low-energy method, phase inversion temperature, with a high-energy method, ultrasonic homogenization. The formulations were prepared using different oil types at different concentrations, with varying surfactant concentrations. Nanoemulsion formulations were also prepared in the absence of the drug, to compare drug-loaded formulations with blank formulations.
The nanoemulsion consisted of two phases: the oil phase and the water phase. The water phase was prepared by mixing 25 mL of 0.5% sodium alginate with Tween 80 in varying amounts: 0.75 g, 1.05 g, and 1.50 g. The mixture was then kept on a magnetic stirrer and heated to 85 °C. The oil phase consisted of a mixture of lidocaine and the lipid. The lipids used were oleic acid, coconut oil, and beeswax. A stock formulation of the oil phase was prepared by adding 1.2 g of lidocaine to 0.05 g of oil and heating in order to dissolve the drug in the oil. From this stock, two different amounts of oil phase were weighed, 0.15 g and 0.3 g, and heated. When both phases reached 85 °C, the water phase was added to the oil phase, dropwise, with constant homogenization using the Ultra-Turrax® homogenizer (IKA T25, Staufen, Germany) at 8500 rpm for 5 min. Then, 100 mL of distilled water, also heated to 85 °C, was added to the mixture with constant homogenization. The resulting concentrations were determined to be 1.2 mg/mL and 2.4 mg/mL. The final concentration of water was kept constant at 97.5% wt.
The resultant nanoemulsion was subjected to ultrasound using a probe sonicator (300 V/T ultrasonic homogenizer, BioLogics Inc., Houston, TX, USA), at 20 kHz and power 70% for 5 min.
Surfactant Concentration and Oil Type and Composition
A total of 18 nanoemulsion formulations were prepared; 6 for each oil, the amount of surfactant and oil phase for each surfactant-to-oil ratio (SOR) is shown in Table 1. Two groups of NEs were prepared. First, the total lipid content was kept constant, while the SOR was varied by altering the amount of surfactant and vice versa for the second group where the influence of oil content was evaluated. Blank formulations were also prepared using the same method but without lidocaine, whereby the oil phase consisted of the lipid only.
Drug Entrapment Efficiency
Lidocaine entrapment in the nanoemulsion was measured by determining the amount of free drug present in the aqueous phase using the cellulose dialysis membrane method. A cellulose membrane with a molecular weight cut-off of 3500 Dalton was used, which was soaked in phosphate buffer solution (PBS) at pH 7.4 overnight prior to use. A total of 3 mL of the sample formulation was placed in the dialysis membrane, which was then tightly closed at both ends. The membrane was then immersed in a 100 mL receptor compartment consisting of PBS (pH 7.4) and ethanol at a ratio of 80:20, to ensure sink conditions. PBS was prepared by mixing 0.5 g of disodium orthophosphate and 0.3 g of potassium dihydrogen phosphate, with the pH adjusted using a pH meter (Sper Scientific Direct, Scottsdale, AZ, USA). The system was covered and placed in a mechanical shaker (Scichem Tech, Bilston, UK) for 24 h. A sample was taken from the receptor compartment and analyzed using HPLC to determine the amount of free drug that crossed the membrane. Entrapment efficiency (EE%) was calculated using the following Equation (1):
EE% = (Wa - Ws) / Wa × 100 (1)
where Wa is the amount of drug added to the formulation and Ws is the amount of unencapsulated drug measured in the supernatant.
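A minimal numerical illustration of Equation (1), using hypothetical masses rather than study data:

```python
def entrapment_efficiency(wa_mg: float, ws_mg: float) -> float:
    """EE% = (Wa - Ws) / Wa * 100, with Wa the drug added and Ws the free (unencapsulated) drug."""
    return (wa_mg - ws_mg) / wa_mg * 100.0

# Example: 7.2 mg of lidocaine loaded, 1.1 mg recovered unencapsulated (illustrative numbers).
print(f"EE = {entrapment_efficiency(7.2, 1.1):.1f}%")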
Particle Size Measurements
Nanoemulsion droplet size measurements were taken using the Litesizer 500 (Anton Paar, Graz, Austria), which uses the dynamic light scattering technique. The samples were measured in a standard disposable cuvette at 25.0 °C, and the measurement angle was set to back-scatter at an angle of 175°. Droplet size is presented as the mean hydrodynamic diameter. The Stokes-Einstein equation (Equation (2)) was used to calculate Dh as follows:
Dh = kB T / (3 π η D) (2)
where Dh is the particle hydrodynamic diameter, kB is the Boltzmann constant, T is the absolute temperature, D is the translational diffusion coefficient, and η is the viscosity of the aqueous phase (Pa·s). Each measurement was a series of 5 repetitions per sample, and the mean particle size and standard deviation were determined. The viscosity of the 0.5% sodium alginate solution was 24 mPa·s and was considered to be the viscosity of the dispersant phase during particle size measurements; it was also considered too low to affect the DLS measurement. The samples were measured without dilution, as they were already very diluted (97.5% wt. water), so the effects of multiple scattering could be avoided. The particle size distribution by number was also determined, and a relative refractive index, defined as the ratio of the refractive index of the lipids, oleic acid (1.463), coconut oil (1.430), and beeswax (1.444), to that of the dispersion medium (1.33), of 1.09, 1.07, and 1.08, respectively, was assumed in the calculation of the particle size distributions. The particle size measurements were also reported as the mean diameters (d43 and d32), calculated using Equations (3) and (4), respectively:
d43 = Σ(ni di^4) / Σ(ni di^3) (3)
d32 = Σ(ni di^3) / Σ(ni di^2) (4)
where ni is the number of droplets of diameter di.
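The size descriptors in Equations (2)-(4) are straightforward to compute. The sketch below (Python/NumPy) shows one way to do so, using the 24 mPa·s alginate-phase viscosity reported above together with an illustrative diffusion coefficient and a hypothetical number-weighted distribution:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(D: float, T: float = 298.15, eta: float = 0.024) -> float:
    """Stokes-Einstein: Dh = kB * T / (3 * pi * eta * D), in metres.
    eta defaults to 0.024 Pa.s, the 0.5% alginate phase viscosity reported above."""
    return K_B * T / (3.0 * np.pi * eta * D)

# Illustrative diffusion coefficient chosen to give a droplet of roughly 40 nm.
print(hydrodynamic_diameter(D=4.5e-13) * 1e9, "nm")

# d43 and d32 mean diameters from a hypothetical number-weighted size distribution.
d_i = np.array([30e-9, 40e-9, 55e-9])   # droplet diameters, m
n_i = np.array([120, 80, 15])           # number of droplets in each size class

d43 = np.sum(n_i * d_i**4) / np.sum(n_i * d_i**3)
d32 = np.sum(n_i * d_i**3) / np.sum(n_i * d_i**2)
print(d43 * 1e9, d32 * 1e9)              # nm
```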
Litesizer 500 also offers information about the polydispersity index of the samples, which indicates the breadth of the size distribution.
The polydispersity index (PDI) correlates with the slope of the decay curve. It can be calculated as described in the photon correlation spectroscopy norm (ISO 13321), in which the cumulant fit is a polynomial fit of the logarithm of the correlation function; the fit function can be written as ln g1(τ) = a + bτ + cτ², with PDI = 2c/b². Zeta potential was also measured by the Litesizer 500 using electrophoretic light scattering (ELS), which measures the speed of particles in the presence of an electric field. The sample was placed in an Omega cuvette, closed with the tips and placed in the measuring chamber. Measurements were made at a temperature of 25 °C. Each measurement was a series of 3 repetitions per sample, and the mean zeta potential and standard deviation were determined.
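As an illustration of the cumulant idea (not necessarily the exact fitting routine implemented in the Litesizer firmware), a second-order cumulant fit can be applied to a simulated correlation function:

```python
import numpy as np

# Simulated field autocorrelation function for a slightly polydisperse sample (illustrative values).
tau = np.linspace(1e-6, 1e-3, 200)          # lag times, s
gamma, mu2 = 2.0e3, 4.0e5                   # mean decay rate (1/s) and second cumulant
g1 = np.exp(-gamma * tau + 0.5 * mu2 * tau**2)

# Second-order cumulant fit: ln g1(tau) = a + b*tau + c*tau**2
c, b, a = np.polyfit(tau, np.log(g1), 2)
pdi = 2.0 * c / b**2                        # equivalent to mu2 / gamma**2, here ~0.1
print(f"PDI = {pdi:.3f}")
```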
Stability Studies
The effect of different surfactant-to-lipid ratios on the NEs stability was studied at room temperature (25 °C), over a period of six months. The dispersions were regularly examined for particle size as well as changes in physical appearance, such as gelation, precipitation, and crystallization.
Quantification of Lidocaine
Lidocaine solubility and entrapment study samples were analyzed by high performance liquid chromatography (HPLC), based on the method reported by the Lee group, with some modifications [26].
The chromatographic column used was an Onyx™ monolithic C18, 100 × 4.6 mm, 130 Å, USA. The column temperature was maintained at 25 °C and the injection volume was 20 µL. The pump used was an LC 20AD (Shimadzu, Japan), and the detector was a UV-visible detector (SPD20A, Shimadzu, Japan). The mobile phase consisted of HPLC water and glacial acetic acid mixed at a ratio of 930:50, with the pH adjusted to 3.4 using 1 N sodium hydroxide. Gradient elution was used, in which 4 parts of this solution were allowed to flow through one pump, while 1 part of acetonitrile was allowed to flow through the other. The total mobile phase flow rate was 0.5 mL/min. The standard stock solution of lidocaine was prepared by accurately weighing 25 mg of lidocaine, dissolving it in 3 mL of ethanol and making up the volume with water to 25 mL, to obtain a concentration of 1 mg/mL. A series of dilutions was then prepared from the stock solution to obtain solutions of concentrations 0.05, 0.1, 1, 10, 50, 100, 200, 400, and 600 µg/mL. Chromatograms were integrated at 237 nm and at a retention time between 3.5 and 5.1 min. The calibration curve was plotted between AUC and concentration.
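The calibration and back-calculation step can be expressed in a few lines. The sketch below uses NumPy with hypothetical peak areas (the real calibration data are shown in Figure 1):

```python
import numpy as np

# Standard concentrations (ug/mL) and hypothetical peak areas (AUC) integrated at 237 nm.
conc = np.array([0.05, 0.1, 1, 10, 50, 100, 200, 400, 600])
auc = conc * 1520 + np.random.default_rng(4).normal(0, 300, conc.size)   # illustrative response

slope, intercept = np.polyfit(conc, auc, 1)          # linear calibration curve
r2 = np.corrcoef(conc, auc)[0, 1] ** 2
print(f"AUC = {slope:.1f}*C + {intercept:.1f},  R^2 = {r2:.4f}")

# Back-calculate the lidocaine concentration of an unknown sample from its AUC.
sample_auc = 76000.0
print(f"C = {(sample_auc - intercept) / slope:.1f} ug/mL")
```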
Statistical Analysis
Quantitative data were obtained in triplicates and are reported as mean ± standard deviation. Statistical analysis was performed using Minitab version 19 software. Student's t-test was performed as well. A p < 0.05 was considered to be statistically significant.
Lipid Solubility of Lidocaine
The amount of lidocaine solubilized in the hot melted lipids was analyzed by HPLC. For HPLC analysis, a series of dilutions from a stock solution of 1000 µg/mL was prepared to obtain concentrations of 0.05, 0.1, 1, 10, 50, 100, 200, 400, and 600 µg/mL, and the AUC was integrated at 237 nm [26,27]. The calibration curve plotted between AUC and concentration is shown in Figure 1. Lidocaine is a lipophilic molecule with a reported log P of 2.44 [28]. Lidocaine solubility was higher in oleic acid (406 mg/mL) than in beeswax (347 mg/mL); similar results were recently reported by Hamed et al. [29]. The solubility of lidocaine in coconut oil was found to be the lowest, at 64 mg/mL. Oleic acid has a partition coefficient of 7.64, which could provide the highest solubilizing capacity for lidocaine, followed by beeswax and coconut oil [30]. Beeswax contains fatty acid esters that give it more polar properties compared to oleic acid, so that less lidocaine is solubilized [31], whereas coconut oil is the most polar of the lipids used in this study, due to its high lauric acid composition [32], which limits its ability to dissolve lidocaine. Generally, oil molecules with a small molecular volume or high aromaticity produce a strong solvation effect, resulting in a higher penetration of the oil molecules into the surfactant chain layer, thus improving the rigidity and curvature of the interface [33]. This would also affect the particle size as various lipids are added.
Phase Inversion Temperature
The phase inversion temperature was identified by measuring the % transmission with the Litesizer 500. The results showed an increase in turbidity when the formulation was heated, which indicated the phase inversion temperature. The PIT was detected at 85 °C, marked by a decrease in the % transmission [25]. Subsequent cooling of the emulsion revealed a drop in turbidity, seen as an increase in % transmission. This confirmed the transition from a w/o emulsion into an o/w emulsion [25]. Knowledge of the PIT was particularly important during homogenization, as surfactants can lose their ability to stabilize emulsions if the homogenizer temperature is too close to the PIT, owing to rapid droplet coalescence [34]. An increased concentration of surfactants had the same effect on the cloud point and was useful in determining the PIT. Based on the results obtained, the PIT process needs to be maintained at a constant temperature between 75 and 85 °C to ensure the formation of a bicontinuous microemulsion [25]. At this higher temperature, the interfacial tension decreased and the amount of surfactant adsorbed at the oil-water interface increased gradually until saturation was achieved [35]. This also guaranteed the formation of stable nanoemulsions, with smaller particles upon cooling and dilution, and the prevention of droplet coalescence.
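As a sketch of how the PIT can be read off a transmission-temperature scan, the snippet below locates the steepest drop in % transmission during heating; the heating-curve data are synthetic and only assume the qualitative behavior described above (clear below the PIT, turbid above it).

import numpy as np

def estimate_pit(temperature, transmission):
    # PIT taken as the temperature at the steepest decrease in % transmission
    slope = np.gradient(transmission, temperature)
    return temperature[np.argmin(slope)]

temperature = np.arange(60.0, 96.0)                       # heating ramp, °C
transmission = np.where(temperature < 85, 95.0, 20.0)     # synthetic scan
transmission = transmission + np.random.normal(0.0, 1.0, temperature.size)
print(f"Estimated PIT ~ {estimate_pit(temperature, transmission):.0f} °C")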
Preparation of Alginate-Based Lidocaine Nanocarriers from Nanoemulsion Template
Both blank and lidocaine-loaded nanoemulsions were prepared using a low-energy method (phase inversion temperature) followed by a high-energy method (ultrasonication). Formulations were prepared using three different lipids (oleic acid, coconut oil, and beeswax) at six different surfactant-to-oil ratios: 5:1, 5:2, 7:1, 7:2, 10:1, and 10:2. The physical appearance of all formulations was noted during the preparation process, after cooling, and throughout the storage period at room and cool temperatures. The preparation method used was the same as that proposed previously by Sarheed et al., with modifications [25]. Sodium alginate was added to the formulation in an attempt to produce a nanoemulsion with a controlled release.
In the preparation of the nanoemulsions, Tween 80 and sodium alginate were mixed and heated, and the turbidity of the mixture increased as the temperature reached the phase inversion temperature, 85 °C. As the process continued by mixing the water phase with the oil phase, followed by the dilution, the formulation remained turbid as the temperature remained at 85 °C. The appearance of turbidity indicated the conversion of the system from o/w to w/o. Keeping the temperature as high as 85 °C, with water dilution, showed that the NEs droplet size would be reduced to 85 nm [36].
Then, each lidocaine formulation was allowed to cool by removing it from the hot plate, allowing it to clear up, and the final formulation was a clear and transparent dispersion. The formulations remained transparent both before and after ultrasonication, which helped disrupt any droplet aggregates formed during mixing with the Ultra-Turrax® homogenizer. The transparency of the formulations suggests a small droplet size, which was maintained throughout the storage period. None of the drug formulations showed any signs of instability such as creaming, precipitation, or crystallization. Nor did any of them display any formation of clumps, suggesting that the concentration of sodium alginate was ideal for the physical stability of the NEs.
The cooling of the blank formulations showed different results than those of the drug formulations. The turbidity decreased after cooling but did not reach the transparency of the drug formulations, and following ultrasonication, the turbidity was further reduced. It is well known that hydration of the ethylene oxide groups of Tweens increases significantly with reduced temperature and dilution, promoting the preferred curvature change of the surfactant monolayer and, consequently, the tendency of oil droplet formation [37]. All blank formulations appeared translucent, with some showing signs of instability within a few weeks. After a duration of 7 months, lidocaine formulations retained a stable transparent appearance, while blank nanoemulsions displayed signs of instability that appeared as creaming. Figure 2 shows all formulated nanoemulsions, both lidocaine-loaded and blank. Lidocaine was found to possess surfactant-like properties due to its amphiphilic structure, thus improving the stability of lidocaine-containing nanoemulsions. This property is discussed later in the study.
Entrapment Efficiency
After placing the nanoemulsion formulation in the dialysis membrane for 24 h, the receptor compartment was analyzed using HPLC. Figure 3 shows HPLC chromatograms of the representative formulations prepared at a surfactant-to-oil ratio of 7:2. It was found that the nanoemulsion formulation was successful in encapsulating the drug within the formulation, with an entrapment efficiency of about 96.9 ± 0.4% for all formulations. This high EE could be attributed to the low surface tension between droplets that prevented their coalescence, which was confirmed by the absence of phase separation, and thus enhanced lidocaine solubility and its retention in the nanoemulsions [38]. Moreover, lidocaine is a weak base due to the presence of a terminal amine group, -N(CH3)2, that can accept a hydrogen ion, turning the molecule into a positively charged cationic form. This could enable lidocaine to form H-bonds with the hydroxyl moieties of Tween 80 and alginate, and thus improve the NEs encapsulation. This behavior could affect the release properties of lidocaine from the NEs for various pharmaceutical applications, such as transdermal drug delivery. Further studies are therefore needed to assess this effect.
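The entrapment-efficiency formula is not stated verbatim in the text; assuming the common indirect calculation, in which the drug recovered in the receptor compartment is treated as the unentrapped fraction, a minimal sketch with hypothetical masses is:

def entrapment_efficiency(total_drug_mg, free_drug_mg):
    # EE% = (total drug - unentrapped drug) / total drug * 100
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# Hypothetical example: 25 mg loaded, 0.78 mg recovered in the receptor compartment
print(f"EE = {entrapment_efficiency(25.0, 0.78):.1f}%")   # ~96.9%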
Effect of Surfactant Concentration on Particle Size
The mean droplet diameter for lidocaine nanoemulsions is shown in Figure 4. Results showed that the droplet size of almost all prepared formulations was <140 nm, 39% of which were <60 nm. The mean droplet size of the smaller particles was found to be 15.3, 15.0, and 17.0 nm for nanoemulsions formulated using beeswax, coconut oil, and oleic acid, respectively. The mean size of the larger particles, on the other hand, was found to be 465.7, 517.0, and 534.0 nm. The average hydrodynamic diameter was determined according to ISO 13321 (1996), which might lead to misinterpretation in the case of polydisperse systems. In order to have better knowledge of the nanoemulsions, complete particle size distributions, such as d32 and d43, were also reported in this study [34]. This is discussed at a later point in the study.
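For reference, the surface-weighted (d32, Sauter) and volume-weighted (d43, De Brouckere) mean diameters can be computed from a number-based size distribution as in the sketch below; the bin diameters and counts are hypothetical and merely mimic a bimodal population.

import numpy as np

def mean_diameters(d, n):
    d, n = np.asarray(d, float), np.asarray(n, float)
    d32 = np.sum(n * d**3) / np.sum(n * d**2)   # Sauter mean diameter
    d43 = np.sum(n * d**4) / np.sum(n * d**3)   # De Brouckere mean diameter
    return d32, d43

diameters = [15, 17, 20, 500]       # nm, hypothetical bins
counts = [900, 80, 15, 5]           # number of droplets per bin
print(mean_diameters(diameters, counts))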
To identify the relationship between the concentration of surfactant and particle size, nanoemulsions with different surfactant concentrations were prepared and their droplet size was measured. It is worth mentioning that the critical micellar concentration (CMC) of Tween 80 was 0.012-0.015 mM [25]. The surfactant concentration used in this study was between 0.57 and 1.15 mM, which was above the CMC, to ensure high drug solubilization and loading, optimal particle size, and long-term stability.
At lower oil concentrations (0.15 g), droplet sizes decreased significantly (p < 0.05) to 129.1 ± 28.4 nm, 69.4 ± 56.9 nm, and 18.8 ± 0.5 nm with an increase in surfactant concentration, as the surfactant-to-oil ratio was 5:1, 7:1, and 10:1, respectively. This was noticeable when oleic acid was the lipid used and could be due to the surface activity of the surfactant and its solubilization capacity. Surfactants decreased the interfacial tension between the oil and water phases, thus decreasing the amount of free energy required to deform or disrupt the droplets, which resulted in a smaller droplet diameter. They might also form a protective coating around the droplets and prevent them from coalescing with one another. However, it is important for the emulsifier molecules to adsorb rapidly enough around the droplets in order to form this protective interfacial layer [34]. At an SOR of 10:1, the smallest droplet size of 18.8 nm was obtained. This might indicate that a monolayer of surfactant was surrounding the oil droplets, taking into account that the length of the hydrophilic chain of Tween 80 is 3.8 nm, as stated by Shukla et al. [23,39].
In coconut oil formulations, the droplet size showed an initial decrease (p < 0.05) from 112.5 ± 34.2 nm to 17.0 ± 1.0 nm as the surfactant concentration increased from an SOR of 5:1 to 7:1. This could be attributed to the reduction in the interfacial tension resulting from the adsorption of the surfactant molecules on the surface of the oil. The surfactant coverage was adequate to prevent oil droplets from coming close to each other, and thus no coalescence or phase separation was observed in the formulations [40,41]. However, a further increase in surfactant concentration resulted in a significant increase in droplet size to 68.8 ± 4.2 nm, at SOR 10:1. A similar behavior was also observed in beeswax formulations, where the droplet size decreased with an initial increase in the surfactant concentration and increased again (p < 0.05) with a further increase in the concentration of the surfactant. This is probably because the amount of surfactant was high enough to initially achieve complete coverage of the oil droplet, along with the presence of excess free surfactant molecules. Excess surfactant molecules might then form aggregates in the continuous phase, which reduces the surfactant concentration available to cover the oil phase, and as a result it would lead to an increase in the droplet size of NEs [40,42]. The presence of excess surfactant could also induce depletion flocculation, as reported by McClements [34]. In addition, the results suggested that there was an optimal concentration of surfactant to formulate nanoemulsions [33]. This might also indicate that at higher surfactant concentrations, the droplet size was limited by the shear disruptive forces produced by the ultrasound rather than by the amount of surfactant present [43].
At higher oil concentrations (0.3 g), the oleic acid formulation droplet size decreased significantly to 324.1 ± 51.0, 134.3 ± 8.2, and 106.1 ± 2.8 nm with increasing surfactant concentration, at surfactant-to-oil ratios of 5:2, 7:2, and 10:2, respectively. With coconut oil as the lipid in the formulation, the droplet size decreased to 108.9 ± 40.8, 26.5 ± 24.5, and 16.7 ± 0.6 nm with increased surfactant concentration, at ratios of 5:2, 7:2, and 10:2, respectively. This similar behavior was also explained by the effect of the surfactant, which adsorbs onto the water-oil interface to reduce interfacial tension, causing droplet disruption and a subsequent reduction in droplet size.
Even with an increase in the amount of oil added, with beeswax used as the lipid in the formulation, there was still an initial droplet size decrease, followed by an increase. The decrease in droplet size was caused by sufficient surfactant coverage of the oil droplets, while the subsequent increase in droplet size was possibly due to the aggregation of the excess surfactant. Excess surfactant above the CMC resulted in the formation of micelles with a relatively constant concentration of monomer [34]. These micelles have a fairly well-defined average size and shape under a specified set of conditions, and above the CMC their number appears to increase rather than their size and shape [34].
However, the different concentrations of the surfactant used were high enough to prevent coalescence or any other form of instability in all formulations, where no phase separation was observed.
Effect of Oil Concentration on Particle Size
The increase in the concentration of the oil phase at a specific surfactant concentration also had an effect on the droplet size. The excess amount of oil caused the size of the emulsion droplets to increase [40].
The expected increase in droplet size was detected by particle size measurement, as a result of the increase in the dispersed phase [23]. This behavior was evident in oleic acid formulations. When comparing the surfactant-to-oil ratios, 5:1 and 5:2, it was noted that the droplet size increased significantly from 129.2 ± 28.4 to 324.1 ± 51.0 nm. At ratios 7:1 and 7:2, the droplet size increased (p < 0.05) from 89.2 ± 66.0 to 134.3 ± 8.2 nm, and at ratios 10:1 and 10:2, the droplet size changed from 18.8 ± 0.5 to 123.5 ± 39.2 nm significantly.
However, the rise in oil phase concentration did not have any significant effect (p > 0.05) on the droplet size in coconut oil formulations. The change was observed only at a higher surfactant concentration, with a change in SOR from 10:1 to 10:2, which showed a significant droplet size decrease (p < 0.05) from 68.8 ± 4.2 to 16.74 ± 0.60 nm.
On the other hand, beeswax formulations showed an increase in the droplet size only at a lower surfactant concentration. At an SOR of 5:1, the droplet size was 101.6 ± 26.2 nm, while it was 135.9 ± 4.8 nm at an SOR of 5:2 (p < 0.05). At a higher surfactant concentration, the concentration of oil did not seem to have a significant effect on the droplet size as it increased (p > 0.05).
Effect of Oil Type
At a specific surfactant-to-oil ratio, changing the lipid used in the formulation had an effect on the droplet size. However, beeswax and coconut oil formulations showed a close relationship in particle size. There was only a significant difference (p < 0.05) in the droplet size at a higher surfactant concentration. At the ratio of 5:1, all formulations showed similar droplet sizes, whereas at 7:1, both coconut oil and beeswax formulations had a smaller droplet size (p < 0.05) than oleic acid formulation. At 10:1, there were significantly different droplet sizes (p < 0.05) in which beeswax displayed the highest, followed by coconut oil and then oleic acid; 109.7 ± 54.1, 68.8 ± 4.2 and 18.8 ± 0.5 nm, respectively.
Oleic acid nanoemulsions had the highest particle size in most of the formulations, relative to formulations prepared using other lipids. This could be due to the solubility of lidocaine in oleic acid, which was also the highest. As described earlier, lidocaine was found to be highly soluble in oleic acid, which increased the amount of drug in the oil phase to be solubilized and thus increased the size of the droplets. Shukla also reported the same effect on the particle size, which was attributed to the angular structure of oleic acid that could cause larger particles [39]. Leung and Shah [33] found that increasing the oil chain length led to less penetration of oil molecules into the interfacial film, resulting in larger particles, since attractive steric forces predominate. They also concluded that a long-chain oil is a poor solvent for the interfacial film.
Oils with a high concentration of polar compounds were reported to reduce the interfacial tension and facilitate droplet disruption during high pressure homogenization [44]. Oleic acid was considered to be the main non-polar fatty acid in lipids [45]. In this sense, oleic acid was less likely to be solubilized in the aqueous phase, resulting in larger particles [19].
Beeswax and coconut oil nanoemulsions showed smaller droplets relative to oleic acid NEs. Wax esters accounted for 70% of beeswax [31]. These components are fatty acids esterified to fatty alcohols, mainly palmitate, palmitoleate, hydroxypalmitate, and oleate [46]. These compounds provided higher polarity to beeswax, compared to the less polar oleic acid. This would lead to a decrease in the interfacial tension, resulting in a smaller particle size.
Coconut oil was mostly composed of lauric acid, accounting for 40% of its constituents, as reported by Rizza's group [32]. Coconut oil showed the lowest measured contact angle at various cooling temperatures, compared to Jatropha curcas oil and sunflower oil, which are rich in oleic acid and linoleic acid, respectively. This was due to the high polarity of lauric acid relative to oleic acid, which has a weak polarity. In addition, lauric acid had low dipole-generated interactions, resulting from the movement of electrons, which results in low interactions between lauric acid molecules. This reduced the viscosity and thus achieved a low contact angle [32]. It was also observed that the higher the viscosity of the oil phase, the higher the droplet size, and the more energy required to disrupt the oil droplets [19,43]. Moreover, the straight chain of lauric acid might also explain the smaller droplets observed in this study with the use of coconut oil.
Effect of the Drug on Particle Size
Upon formulating the drug-loaded nanoemulsion, lidocaine was added to the oil phase, which was then mixed with the water phase to produce the final dispersion. It was noticed that the addition of the drug influenced the behavior of the blank formulation. Generally, lidocaine was found to impart a form of stability to the final formulation. Most of the as-prepared blank formulations were considered turbid, while all formulations produced upon the addition of lidocaine were clear nanoemulsions. This could be attributed to the inherent properties of lidocaine. Due to its chemical structure, it was suggested that lidocaine had a surfactant effect. Lidocaine and other anesthetics have been identified as amphiphilic in nature. Similar to surfactants, at a certain concentration (the CMC) they appear to form micelles. Uesono et al. showed this surfactant behavior, in which lidocaine and other anesthetics were comparable with traditional surfactants in their ability to generate an emulsified formulation [47]. The Sadurní group also demonstrated an enhancement in the stability of nanoemulsions with the addition of lidocaine, as its chemical structure consists of a hydrocarbon chain, an aromatic ring, and an amide group, which imparts the amphiphilic behavior [48]. The effect of lidocaine was explained by Yuan et al. by the fact that lidocaine is polar, and it is this polarity that enables lidocaine to interact with the surfactant and the interface linkers, which would increase the hydrophilicity of the oil phase and thus increase nanoemulsion stabilization [49].
With regard to the particle size, 50% of the formulations exhibited an increase in the droplet size upon the addition of lidocaine, relative to blank formulations, as shown in Figure 5. This was not, however, reflected in the physical appearance of the formulations, which exhibited a continued state of clear, stable nanoemulsion, whereas most of the blank formulations showed a turbid appearance. At lower oil concentrations, with a surfactant-to-oil ratio of 5:1, all formulations showed a significant increase (p < 0.05) in the droplet size. However, a droplet size decrease was observed at 7:1, except for the oleic acid formulation, where no change was observed. At 10:1, with the exception of oleic acid, whose droplet size decreased significantly (p < 0.05), beeswax and coconut oil formulations showed an increase in the droplet size.
Polydispersity Index
During the measurement of the particle size, the distribution of the particles in the samples was also measured. The formulated nanoemulsions were found to be of a polydisperse nature, as shown in Figure 6a-c, in which the PDI values fell in the range of 20-30%. It was predicted by Eriksson and Ljunggren [50], using the multiple chemical equilibrium approach, that stable microemulsion polydispersities should be in the range of 10-45%. This is based on the assumption that these systems have droplets that are viewed as loosely bonded complexes rather than small droplets in the strict sense of the word [50]. This behavior was also reported by other groups, and no phase separation was observed [23,39,51,52]. One explanation for high PDIs is the formation of a bimodal distribution, with one population of small droplets around 15.3, 15, and 17 nm for nanoemulsions formulated using beeswax, coconut oil, and oleic acid, respectively, and another population of large droplets around 465, 517, and 534 nm, respectively. This was similar to the data reported by Mayer et al. [53]. Another explanation for the larger PDI was the overestimation of the cumulant analysis, which represents a small correction to the shape of the correlation function [23]. The mean particle diameter measurements of d32 and d43 showed that almost all NEs could be produced with the majority of small droplets in the range of 15-20 nm (Supplementary Material Table S1). On the other hand, the full particle size distribution measurements indicated that some droplet aggregation had occurred, resulting in a bimodal distribution. In view of the relatively high concentration of alginate and the high zeta potential (above −60 mV in magnitude), it could be assumed that high electrostatic and steric repulsion between NEs droplets could occur. Mun et al. [16] postulated that a high concentration of alginate could induce a considerable depletion attraction between the NEs droplets that led to some droplet flocculation (depletion flocculation) during the preparation. Attractive depletion interaction occurred between the emulsion droplets when they were surrounded by small nonadsorbing colloidal particles, such as surfactant micelles, polymers, or nanoparticles [34].
As discussed earlier, the formation of a bimodal distribution could be considered the main reason for the high polydispersity index. The presence of alginate in the colloidal systems was proposed to be responsible for the multimodal distribution and high PDIs of nanoemulsions [14,53,54]. Artiga-Artigas et al. studied the effect of sodium alginate incorporation on the particle size distribution of nanoemulsions [14]. They observed a multimodal particle distribution, attributed either to unadsorbed surfactant micelles at the oil droplet interface, which were repelled due to the presence of excess alginate molecules, or to alginate aggregates. The use of ultrasound was also reported to give a multimodal distribution at any amplitude or power, as reported by the Salvia-Trujillo group [54]. However, the use of ultrasound was justified by its ability to disrupt larger droplets and to stabilize the nanoemulsion by creating smaller ones. This was confirmed by the transparent appearance of the nanoemulsions and the lack of phase separation in our study. Ultrasound exerts its physical impact through cavitation, which can induce polymer depolymerization. This could further improve the stability of nanoemulsions by reducing the steric impediment of alginate polymer chains during their deposition around the oil droplets [55]. Thus, if the system was not subjected to high shear stress, this could result in the formation of polymer aggregates [14].
Khorasani and Pourmahdian investigated the synthesis of hydrogel nanoparticles through the inverse microemulsion polymerization method and reported that the use of higher amounts of water upon dilution at constant concentration of Tween 80 would lead to the expansion of the continuous phase and significant increase in the interfacial surface area. As a result, the surfactant was no longer able to sustain nanoemulsion stability without changing the particle size and thus the PDI would increase [52].
Another possibility of higher PDIs might be due to the homogenization used to emulsify the water and oil phases [54]. During homogenization, eddies are formed and the fluid around these regions is disrupted and deformed. Normally, eddies of different sizes are formed in the fluid, in which large-sized eddies produce shear stresses that are not very effective in deforming the droplets. On the other hand, small-sized eddies produce high shear stresses that are dissipated in the fluid medium. Only medium-sized eddies are effective in disrupting the droplets. Thus, due to the different size of the eddies present, a polydisperse system is more likely to be formed. In addition, the size of the droplets is determined by the length of time spent in the homogenizer disruption region, which contributes to the creation of a polydisperse system [34]. It was also proposed that both alginate and Tween 80 could adsorb on the lipid surface, leading to the formation of a complex interface that is reflected on the particle size distribution [54]. It was postulated that the driving force of alginate adsorption to the NEs droplets was electrostatic in nature [16].
Zeta Potential
Zeta potential is considered an effective way to describe the surface potential of the suspended droplets. Thus, the electrical properties of the nanoemulsion formulations were measured by obtaining zeta potential values. All formulations showed a negative charge greater in magnitude than −60 mV, reaching a relatively constant value between −70 and −80 mV, suggesting that the emulsion droplets reached saturation with alginate rather than with Tween 80 [16], as shown in Supplementary Material Figure S1. Zeta potential was measured in triplicate for each formulation, and the mean values are shown in Table 2. The measured zeta potential reflected the observed stability of the lidocaine formulations, in which the physical appearance was a clear nanoemulsion with no signs of instability, such as creaming. Generally, a zeta potential greater than ±30 mV is considered adequate to ensure the physical stability of a nanoemulsion [56]. A high zeta potential ensures stability because a sufficiently large charge can prevent the aggregation of the droplets through electrostatic repulsion. The nanoemulsions were found to have a negative charge, because the droplets carry an electrical charge that depends on the types of ionizable molecules present and the pH of the aqueous phase [34]. Part of the charge behavior is explained by the concentration of Tween 80: an increase in the concentration of the surfactant above a critical value results in the sudden expulsion of OH groups from the o/w surface, which can reduce the surface potential and thus the zeta potential [57]. However, it was reported that Tween 80 was responsible for a slight increase in the negative charge at the oil-water interface [19,58]. Therefore, alginate could be considered the reason for the high zeta potentials in this work. Alginate has carboxylate and hydroxyl functional groups that are easily deprotonated at neutral pH [14,59]. Furthermore, the application of shear stress, such as ultrasonication, can break up or modify the alginate chains and release a greater number of free molecules that can be adsorbed at the oil-water interface [60]. As a result, more deprotonated groups would be deposited around the oil droplets and a high zeta potential would be produced, which would stabilize the nanoemulsions by preventing re-coalescence [14]. This supports the electrostatic contribution to the physical stability of the nanoemulsions. The zeta potential data also indicate that the concentration of alginate in the NEs was optimal to achieve high net negative charges, which could enable droplets to repel each other electrostatically [16].
Stability Study of Lidocaine NEs
The long-term stability of the nanoemulsions was characterized by both physical observation and measurement of the droplet size over the entire storage period. On physical inspection, it was shown that the formulations retained their transparent appearance without any signs of creaming or phase separation. The NEs in this study were stable compared to the one produced by Machado [61], in which phase separation occurred several hours after the preparation. The presence of a double layer of surfactant and polymer around oil droplets was reported to minimize the creaming rate by controlling the net density of the droplets and bringing it closer to that of the surrounding aqueous phase [16]. The transparency of the formulation indicated a small droplet size, which was further illustrated by the particle size measurement. Figure 7a-c display the droplet size measurements of the various NEs prepared; it was found that all formulations had a droplet size less than 150 nm. After 30 weeks, 44.4% of the formulations experienced a droplet size decrease, while 55.6% of the formulations were found to have an increased droplet size. The increase in droplet size could be ascribed to Ostwald ripening, in which droplets of smaller sizes tend to diffuse into larger droplets due to their higher chemical potential. This was reported to be the most common mechanism for destabilizing nanoemulsions [48,62].
However, even with the measured increase in the droplet size, it remained below 150 nm, with the exception of the coconut oil and oleic acid formulations at an SOR of 5:2. Even so, the increase was not reflected in the physical appearance of the formulations, which, as mentioned earlier, remained as clear and transparent as the fresh formulations.
To differentiate between the different mechanisms of instability, the cube of the average radius, r³, of the emulsions was plotted against time, in which a linear relationship was taken as evidence of Ostwald ripening. The Lifshitz-Slezov and Wagner (LSW) theory [48] describes the rate of Ostwald ripening, ω, as in Equation (7):

ω = dr³/dt = (8 C∞ γ Vm D) / (9 ρ R T) (7)

where C∞ is the bulk phase solubility (the solubility of the oil in an infinitely large droplet), γ is the interfacial tension, Vm is the molar volume of the oil, D is the diffusion coefficient of the oil in the continuous phase, ρ is the density of the oil, R is the gas constant, and T is the absolute temperature.
On the other hand, it was suggested that when a linear relationship is obtained by plotting 1/r² against time, it should imply coalescence [48,62]. No such linear relationship was obtained, which might be because neither of these mechanisms is predominant and the two breakdown processes can occur concurrently in this system, as suggested by Sadurní et al. [48].
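A minimal sketch of this diagnostic, comparing the linearity (R²) of r³ versus time against 1/r² versus time, is shown below; the storage data are synthetic and serve only to illustrate the comparison, not to reproduce the measured values.

import numpy as np

def instability_mechanism(time, radius):
    t, r = np.asarray(time, float), np.asarray(radius, float)

    def linear_r2(y):
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        return 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

    r2_ostwald = linear_r2(r**3)            # LSW: r^3 grows linearly with time
    r2_coalescence = linear_r2(1.0 / r**2)  # coalescence: 1/r^2 linear with time
    label = "Ostwald ripening" if r2_ostwald > r2_coalescence else "coalescence"
    return label, r2_ostwald, r2_coalescence

weeks = np.array([0, 5, 10, 20, 30])
radii = (50.0**3 + 900.0 * weeks) ** (1.0 / 3.0)   # synthetic r^3-linear growth
print(instability_mechanism(weeks, radii))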
Conclusions
In conclusion, a stable lidocaine nanoemulsion was successfully formulated using a combination of high- and low-energy methods: ultrasonication and phase inversion temperature, respectively. The method used produced nanoemulsions that maintained nano-sized droplets <50 nm over long-term storage. Nanoemulsions were formulated using Tween 80 as the surfactant at varying surfactant concentrations and using various lipids (oleic acid, beeswax, and coconut oil) in the oil phase. The use of lidocaine in the formulation was shown to impart a degree of stability to the formulation due to its relative amphiphilic properties.
It was found that an increase in oil concentration contributed to an increase in the size of the droplets. Increasing the surfactant concentration, on the other hand, was shown to decrease the droplet size, as it reduced the interfacial tension and provided a protective cover for the droplets. This effect was observed until the surfactant concentration reached a limit, after which the droplet size increased due to the aggregates that could form from the excess surfactant. The lipid was also shown to have an effect on the droplet size, which is associated with drug solubility: the lipid that showed higher drug solubility also showed a higher droplet size. Nanoemulsion formulation was proven to be a promising approach for encapsulating the active pharmaceutical ingredient, lidocaine, to a high extent. | 2020-12-23T06:16:38.351Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "31c0b234afc849220b55b44aabf7e105a40e7cbb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/12/12/1223/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "bf9634d2ae24d4064767505f4f6c5eb042e9d7d2",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
146662947 | pes2o/s2orc | v3-fos-license | Characteristics of Atheist Pre-Service Elementary Teachers
Introduction
More than 1/3 of young adults claim no religious affiliation, and the number of Americans who are distinctly nonreligious is growing (PEW, 2015). Emerging research on the nonreligious has provided insights into this minority population that has been largely ignored, marginalized, and even reviled in the US. Nearly half of Americans think that the growing number of people who are nonreligious is bad for society (PEW, 2013). Nonreligious Americans span all geographic regions, age groups, and occupations, including teaching. Tens of thousands of atheists are teaching in public school classrooms today, despite an alarming number of people who object to the notion. In one survey, only 59% of Americans expressed agreement that an atheist should be allowed to teach high school (Baylor, 2005).
Given the increasing number of nonbelievers in the US, coupled with the rapid rise in the number of Millennial teachers due to Baby Boomers retiring, it is imperative that we learn more about the beliefs and practices of nonreligious teachers, a group about which essentially no research has been conducted. This study sought to learn more about the beliefs and self-identities of nonreligious pre-service elementary teachers, as well as the role their lack of faith plays in their teaching.
Review of Literature
Terminology
Categorizing the nonreligious is tricky. Identification usually results from self-declaration, which can lead to conflicting paradigms. Lee (2012) argued that existing terminology in the field of non-religion has been "used inconsistently, imprecisely, and often illogically" (p. 129). Some of Day's (2009) participants claimed that they were not religious because they did not identify with a church, yet they believed in some version of a supernatural force. Others identified with a specific religion even though they did not believe in a god. In England, for example, a majority of young people identified as Christians for purposes of familial and social relationships, despite their lack of belief in a god.
Much attention has been given lately to the rise (from 16.1% to 22.8%) in religiously unaffiliated Americans over the past seven years; however, only 7.1% declared that they were Atheist (3.1%) or Agnostic (4%). Americans under 30 are more than twice as likely (35% vs. 17%) as Baby Boomers to declare their lack of religious affiliation. Most of the "nones" who have been garnering recent media attention claimed that they were "nothing in particular" rather than declaring that they were "atheist" or "agnostic" outright. Only 31% of those classified as "nothing in particular" self-identified as atheist or agnostic.
In this study, I chose to include only those pre-service teachers (PSTs) who self-identified as atheist or agnostic. I left out PSTs who wrote that they were "spiritual" or any other derivation of "other". The term "atheism" designates a lack of belief in a god(s) (Smith, 1979;Bramlett, 2012); whereas, agnosticism refers to a lack of certainty over whether god exists (Miovic, 2004;Bramlett, 2012). Yet, because atheism and agnosticism are often used interchangeably to refer to people who lack belief in god(s), I use the abbreviation, "A/A". Throughout this paper, A/A, nonreligious, and nonbelievers are used synonymously.
Characteristics of Atheist Pre-Service Elementary Teachers
Derek Anderson *
The purpose of this study was to investigate the beliefs and self-identities of 15 nonreligious pre-service elementary teachers (PSTs), as well as the role their lack of faith plays in their teaching. Semi-structured interviews and participant self-analyses served as data sources, which were used to categorize the PSTs into Silver's (2013) typology of nonbelief. Notably different from the distribution of Silver's national sample of nonbelievers, the nonreligious PSTs were much less engaged in their nonbelief and were more willing to comply with religious activities. The nonreligious PSTs were drawn to teaching due to their love of learning, and they intend to promote critical thinking and tolerance in their classrooms; however, 13 (87%) of the PSTs expressed concern about the negative impact their lack of religion might have on their careers and intend to keep their lack of belief private.
Public Perception of Atheists/Agnostics
Americans report greater disregard for atheists than they do toward any other religious, ethnic, or racial group (Cragun et al., 2012; Edgell, Gerteis, & Hartmann, 2006). More than 60% of Americans expressed that atheists negatively influence society (Fitzgerald, 2003, as cited in Bramlett, 2012), and nearly half of Americans would disapprove of their child marrying an atheist (Edgell et al., 2006). Americans are less likely to vote for a hypothetical atheist than a Muslim or gay candidate, with 43% stating that they would not vote for an otherwise qualified candidate if s/he were an atheist (Jones, 2012). Politically, it is perilous for an A/A candidate. Of the 535 members of US Congress currently, there are no admitted Atheists or Agnostics. What's more, there are still laws in several states barring atheists from holding public office (Cimino & Smith, 2007). Atheists and Agnostics are still barred from participating as Scouts or Scout Leaders, though the Boy Scouts of America dropped its longstanding ban on homosexuals in May 2013 (www.scouting.org). Furthermore, courts have a consistent record of denying custody of children to A/A parents expressly because of their lack of religious belief (Cline, 2006).
Beyond the overt discrimination in the US, A/As report subtle discrimination, which is correlated with the extent to which the person was "out" or public about her atheism (Hammer, Cragun, Hwang, & Smith, 2012). Atheists commonly experience slander (both personally and in the media), coercion (pressure to perform religious behaviors or risk social consequences), and social ostracism (Cragun et al., 2012).
Religion in Public Schools
Corresponding with their affinity for religion generally, Americans overwhelmingly support the presence of religion in our public schools. Despite 50 years' worth of case law that has declared school prayer unconstitutional, 61% of Americans think daily prayer should be allowed in classrooms, and 75% support prayers as part of official school programs (Riffkin, 2014). Once again, however, it appears that Americans' perspectives are divided along generational lines, with 56% of people under 30 supporting the Supreme Court's ban on school prayer compared with 30% of Americans older than 60 (PEW, 2012). A/A children and teachers are susceptible to discrimination since most US schools tend to be Christian-centric (Hartwick, 2007; Ribak-Rosenthal & Kane, 1999). Recently, Michigan (the state in which this study took place) passed a law requiring all schools to lead students in the recitation of the Pledge of Allegiance. Though seemingly minor, the daily choral recitation of the words "under God" is a clear reminder that A/As are different from the vast majority of American students and teachers (Laycock, 2004).
American Teachers and Atheism/Agnosticism
America's shifting demographics certainly have implications for its teacher workforce, which has been graying since the 1980s. In 1988 the average age of a US teacher was 41; in 2008 it was 55 (Ingersoll & Merrill, 2010). Recently, however, the US has seen a "greening" of its teacher workforce on the heels of a mass retirement of Baby Boomers. In 2011, only 31% of teachers were aged 50+, with 22% of teachers younger than 30 years old (Feistritzer, 2011). With 3.2 million teachers in the US, more than 700,000 are under 30 years old.
Though we do not have clear data on the religious affiliations of US teachers, it appears that teachers are more religious than the general population (Slater, 2008). Research on college students revealed that Education majors are the most religious students on campus (Kimball, Mitchell, Thorton, & Young-Demarco, 2009). What's more, Education majors tend to become more religious throughout their college education (Kimball et al., 2009). The vast majority (84%) of teachers in the US are female (Feistritzer, 2011), and females are half as likely as males to self-identify as atheist or agnostic (PEW, 2012). Still, given the overall national trend regarding religion, it is safe to assume that younger teachers are more likely to be A/A than older teachers.
Despite the increasing number of A/A teachers, we know very little about them. Much has been written about the lives of teachers who are religious (see Hartwick, 2007, 2009, 2012), but there are only a few published studies on nonreligious teachers. This paper stems from a previous study (Author, 2014) of four A/A pre-service elementary teachers (out of a class of 22) who planned, taught, and reflected on a world religions field experience with 7th-grade students, as part of a program requirement during their semester prior to student teaching.
In this paper, I sought to learn about the identities, beliefs, and intentions of 15 nonreligious elementary PSTs from four cohorts. Unlike my previous work on how the PSTs planned and taught lessons, this study aimed to learn more about the self-declared A/A PSTs, including their world views, their school experiences and decisions to become teachers, how their nonbelief relates to their teaching experiences, and their projection of how their nonbelief will impact their future teaching practice. Using Silver's (2013) typology of nonbelief, I used semi-structured interviews and participants' journals to categorize the 15 PSTs into six "types" of nonbelievers.
In an attempt to delineate types of nonbelievers beyond merely agnostics and atheists, Silver (2013) developed a typology of six categories: Intellectual A/A (IAA), who proactively seek to educate themselves through intellectual association; Activist (AAA), who are socially active, proactive, and vocal about current issues; Seeker-Agnostic (SA), who recognize the limitations of human knowledge and experience and are open to possibilities of metaphysical existence; Anti-Theist, who are assertively and diametrically opposed to religious ideology ("new atheists"); Non-Theist, who are apathetic or disinterested in religion; and Ritual A/A (RAA), who hold no belief in god(s) but find utility in some traditions, rituals, or the teachings of some religious traditions.
Methods
This study used predominantly qualitative and ethnographic methods (Creswell, 1998; Yin, 2011) to categorize the 15 nonreligious PSTs into Silver's (2013) typology of nonbelief and to seek an understanding of the role of nonreligion in their current and future teaching practices. The undergraduate Elementary Education PSTs, from an approximately 9,000-student public university in Michigan, were in the final semester of their teacher preparation program, the student-teaching practicum. Predominantly Caucasian, female (87%), and in their early-to-mid-20s, the initial participants came from four cohorts of students over four semesters, 94 in total, of whom 74 (78.7%) self-identified as Christian (37 Roman Catholic and 37 Protestant), 15 (16%) as atheist/agnostic, and 5 (5.3%) as Other. This study focused on the 15 PSTs who self-identified as Agnostic or Atheist. The two PSTs who wrote "spiritual" under the Other category and the one PST who wrote "Unitarian Universalist" were excluded.
All 94 PSTs also took The Santa Clara Strength of Religious Faith Questionnaire (SCSRFQ), a 10-question self-report measure designed to assess respondents' strength of religious faith (Plante, 2010; Plante & Boccaccini, 1997; see Table 1). The SCSRFQ, a reliable and validated assessment useful for multiple religions, uses a four-point Likert-like scale, resulting in a range of possible scores from 10 (no faith) to 40 (strong faith) (Freiheit, Sonstegard, Schmitt, & Vye, 2006). Scores under 26 are considered low faith, and scores of 26 and over are considered high faith (Plante, 2010). Although psychometrically sound and commonly used in social science research due to its nondenominational structure, the SCSRFQ is not an optimum instrument for the nonreligious. Six of the 10 questions refer to "my faith," which could be problematic for nonbelievers who interpret that to mean "my lack of faith". For example, one of the stems states, "I enjoy being around others who share my faith." A nonbeliever might strongly agree that she enjoys being around other nonbelievers.
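As a minimal sketch of the scoring rule described above (ten items on a four-point scale, total range 10-40, cutoff of 26), the snippet below totals a hypothetical set of responses and applies the low/high faith classification.

def score_scsrfq(responses):
    # Ten items, each scored 1-4; scores below 26 = low faith, 26 and above = high faith
    if len(responses) != 10 or not all(1 <= r <= 4 for r in responses):
        raise ValueError("expected ten responses, each scored 1-4")
    total = sum(responses)
    return total, ("high faith" if total >= 26 else "low faith")

print(score_scsrfq([1, 1, 2, 1, 1, 1, 2, 1, 1, 1]))   # hypothetical respondent -> (12, 'low faith')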
Using a semi-structured protocol, I interviewed the 15 nonreligious PSTs. The length of the interviews varied greatly (between 30 and 90 minutes), depending on how much each PST chose to share. At the end of each interview, I gave the PST the descriptions of Silver's (2013) six types of nonbelief and asked them to email me with their self-analysis of the type with which they most identify, as well as their reflections on their self-identities as A/As. Using phenomenological principles (Creswell, 1998), I began coding my interview notes before determining which of Silver's categories each PST best fit. Then, using classical content analysis (Leech & Onwuegbuzie, 2008), I examined the interview transcripts and my notes to categorize the PSTs before reading their self-analyses and reflections. Phenomenology seeks to interpret each individual's experiences and perceptions without seeking causal explanations (Van Manen, 1990). Accordingly, I sought to capture the participants' lived experiences before attempting to draw conclusions or generalizations.
My categorizations of the PSTs' types of nonbelief matched the self-identifications of 13 (87%) of the PSTs. For the two PSTs whose self-identifications differed from my categorizations, I asked them to return for a second interview. Each of those follow-up interviews (one by phone) lasted approximately 15 minutes, during which time the PSTs and I reached consensus on which of Silver's categories fit them best. Furthermore, I continued to code the interview transcripts and reflections using constant comparison methods, searching for additional themes with supporting examples until the data reached a point of saturation (Campbell, Quincy, Osserman, & Pedersen, 2013; Dye, Schatz, Rosenberg, & Coleman, 2000).
Findings
In the absence of a dominant framework for examining nonbelievers, I selected Silver's (2013) typology, which allowed the PSTs and me to efficiently categorize each of the nonreligious PSTs into a distinct category. However, the distribution of the 1,123 nonreligious Americans in Silver's research did not match the distribution of the 15 PSTs in my study. The three most common types of PST nonbelievers (Seeker-Agnostic, Ritual Atheist/Agnostic, and Non-Theist, in order) were the three least common types in Silver's research. Whereas only 7.6% of Silver's participants identified as Seeker-Agnostics, 40% of the PSTs in my study were Seeker-Agnostics. Conversely, the most common type of nonbeliever in Silver's research was the Intellectual Atheist/Agnostic at 37.6%, yet none of the PSTs identified as such. Similarly, the second-most common type of nonbeliever in Silver's research was the Activist (23%), yet only one PST (6.7%) was an Activist Atheist/Agnostic.
Despite the 15 PSTs' clear lack of belief in a deity, they were notably less active in their nonbelief than Silver's participants. With nearly half of the PSTs being Non-Theists (20%) or Ritual Atheists/Agnostics (26.7%), the group could be considered relatively uninterested in secular activism. While 75% of Silver's participants were active in their opposition to religion (Intellectual Atheists/Agnostics, Activists, or Anti-Theists), only two (13.3%) of the PSTs were energetic about their atheism. See Table 2.
Pre-Service Teachers' Categorizations in Silver's Typology
My categorizations of the PSTs came primarily from the interview data and initially matched the PSTs' self-appointed categories 87% of the time. Combined with their written self-reflections, their interview transcripts provided ample examples to justify the categorizations into Silver's typology, some of which I provide below.
Seeker-Agnostics
As mentioned above, six (40%) of the PSTs were Seeker-Agnostics. These PSTs repeatedly used the phrase "open mind." For example, one PST said, "I am not sure of the existence of God; however, I do keep an open mind about it all." Another PST said: "I have a curious personality, and have [an] open mind when it comes to religion, but I myself cannot say I necessarily believe in God or some other higher power." All six of these PSTs referred to themselves as agnostic. What's more, they all intentionally avoided the atheist label. Primarily, they avoided calling themselves atheists due to their perception that atheism connotes a certainty about there being no god. Even when presented with the definition that atheism is simply a lack of belief in god(s), they resisted the label. When pressed, these Seeker-Agnostic PSTs relied on circular logic to defend their agnosticism, such as: "I don't believe in a god, but I can't be sure that there is no god, so I can't say I believe in a lack of god."
Ritual Atheists/Agnostics
The four (26.7%) Ritual Atheists/Agnostics described how they liked to participate in religious events, despite their lack of belief in a supreme being. For example, one PST said, "I enjoy various events and activities like Christmas celebrations, but not because they are religious. I enjoy the ritual." Another PST commented: I find comfort in rituals and traditions. For example, at the camp I worked at we had a Native American Ceremony every Friday night. The Chief would come in and we would follow him to the ceremonial bowl where the "spirits" would share their wisdom with us and new campers would pledge to take care of the Earth. I think that is a great ritual.
Common among this group of PSTs was an appreciation for religious lessons and leaders. Another of the Ritual A/A PSTs remarked, "I enjoy reading teachings of many religious leaders. Although I don't identify with any religion, there are many things I have found that encourage me to evaluate myself and the world around me." Notably, these PSTs drew on the rituals and teachings of several religions. One stated: "I have found inspiration in the stories of Siddhartha, Jesus, and within the Bhagavad Gita. I think that religion is popular for a reason; the tales can offer hope and guidance."
Non-Theists
Three (20%) PSTs were classified as Non-theists because they reportedly did not think about religion or nonreligion much, if at all. One PST remarked, "I don't care about religion or atheism. We'll never know those answers, so why waste time thinking about it?" Another Non-Theist PST described how she does not need religion for a moral framework: As of right now there is no proof of a supreme being, and I am okay with that. I know I am a good person that tries to do good things for others, and I'm very happy simply living by the golden rule -I don't need religion in my life.
The other Non-theist PST acknowledged that she did not believe in a supreme being but refrained from talking about it: I am not in any way, shape, or form interested in pushing my disbelief onto others or starting a movement to push society in a direction towards religious disbelief. I don't do or say things with religion or lack thereof in mind. Religion or lack thereof really plays no part in my life on a day-to-day basis.
Activist
One PST was an Activist A/A who stated, "I feel very strongly about the separation of church and state, and am very outright in my beliefs that deal with feminism, LGBT issues, abortion, etc." She maintains a blog and shares articles with others via email but intentionally keeps her atheist activism off Facebook because of the opposing beliefs of her friends and her father.
Anti-Theist
Finally, one of the 15 PSTs was an Anti-Theist, mainly because of her self-admitted arrogance about atheism and overall disdain for religion. She had tendencies associated with both Intellectual Atheists/Agnostics and Activists but fit more closely with the Anti-Theist category because of her contempt for people who hold beliefs about the supernatural. She expressed particular frustration with religious classmates who were going to be science teachers: How can someone who believes in creation be a science teacher? I don't get how people can pick and choose when they are going to follow logic and the laws of science and when they are going to willingly ignore them. Don't they see that holding onto archaic religious beliefs is slowing the advancement of science?
Overall, this PST saw religion as more harmful than good.
PSTs' Reflections on their Nonreligion
Most of the PSTs admitted that they were rather indifferent about their nonbelief. One of the Non-theist PSTs commented: "I've never really thought about categorizing myself as an atheist. I haven't spent a great deal of time considering what group I fit into; and honestly wasn't aware that there were different 'types' of atheists." Another PST, a Seeker-Agnostic, wrote: Although many of my friends are atheists, I have never thought of myself as being part of a group or subcategory so this took me a while to figure out. I don't see myself as being connected to other atheists somehow; it is just who I am.
Several PSTs explained how they were somewhat afraid to talk about their nonbelief. One Ritual A/A PST remarked: I am rather reserved about the fact that I am an Atheist. Not that I am ashamed, but I think that society has created such a stigma and people often think you must be satanic or a hellion if you are an Atheist. I prefer to keep it private because, along with politics, I think religion or lack thereof is personal.
The PSTs seemed happy to participate in this research and were comfortable talking about their nonbelief, yet generally were not prone to discussing their nonbelief in daily life. As one Non-Theist PST said, "I just don't think about it very much and I seldom talk to anyone about it." Perhaps most representative of the PSTs' indifference were their responses to my interview question about Michigan's law requiring all public schools to lead students in recitation of the Pledge of Allegiance. Only one PST, the Activist, expressed any concern about inclusion of the phrase "under God." Most PSTs made comments like this one from a Seeker A/A: I think that the Pledge of Allegiance is an important part of our nation's history. Really, I just don't care. It is completely absurd to me that the word "God" simply spoken in school would offend or infuriate anyone. I mean, COME ON. I say the word "God" all the time simply out of habit. Oh my God, Jesus Christ, Good Lord, Please God . . . etc. are said on the daily by religious and nonreligious citizens.
The PSTs expressed a desire to "not make waves" and seemed quite bound by tradition, particularly with regard to school operations. Teachers are known to be change-averse (Fives & Buehl, 2012), and these PSTs were content to accept the status quo, even if it meant practices that conflicted with their own (non)religious beliefs. Most of the nonreligious PSTs had no problems with Christianity's presence in our public schools. One of the Non-Theist PSTs remarked, "Like it or not, we live in a Christian country. Christian holidays dominate. I don't mind some Christian songs during the Christmas concert. It's more culture than religion." Another PST, a Seeker-Agnostic, commented, "The community where I student-taught is 99.99% White and Christian. I can't expect there not to be a strong Christian influence. It's just the way it is. It doesn't affect me."
Cultural Importance
Nearly all the PSTs described how they think religion is important to society, even though they are not religious themselves. For example, one Seeker A/A PST said, "I just think that culture is so important and religion is a part of that. It's not a part that I need to actively participate in but I understand the importance." One Ritual A/A PST expressed how she thinks society needs religion: Organized religion gives people something more. It gives people the opportunity to be forgiven and forgive others, the opportunity to ask for help, the opportunity to share things they cannot share with a person. Organized religion is not something I am against. For some people it is the reason they are able to do good while here on earth. It gives people purpose and an "out" for when life just gets too hard. Humanity needs a base like that. I don't think(?) they could function without it.
Another PST, a Non-Theist, opined: I have learned that people need their religions. Whether it's the social aspect of being in a community who has the same beliefs, or a 6 year old girl who feels awful about her day, which stemmed from her father dying of cancer and a mom who took off a long time ago, they need it to live a full life.
What's more, several of the PSTs revealed how they were jealous of people who believed in a god. The Activist PST remarked, "I have found myself, while growing up, wishing that I could just fake it to be part of an organized religion. However I found that it was too far out of my reach." Another PST, a Ritual A/A, admitted: "It bothers me that I cannot just make myself believe because I want to be in a community and have a support system larger than those around me." With the exception of the Anti-Theist, all the PSTs felt that religion is mostly good.
The Role of their Nonreligion in their Teaching
Beyond categorizing the 15 PSTs and examining how they conceptualized their nonreligious beliefs in general, I sought to understand how the PSTs' beliefs about nonreligion have impacted their teaching experiences thus far.
All 15 of the PSTs were completing a field-intensive teacher education program and had finished, or were close to finishing, their 16-week student-teaching practicum at the time of the interviews.
Each of the PSTs had at least one story about how they encountered religion during their student teaching practicum. One PST described how her Kindergarten students wrote "god" when asked to come up with words that start with the letter "g". When she walked by one table of students, a little girl asked her, "God make everything, right, Ms. Smith?" to which she replied, "That's what a lot of people believe." The Seeker Agnostic PST explained how she was nervous about the interaction: Luckily, they were okay with that answer, but I was nervous that they would question me on what I believed. I didn't know how I would explain it without having angry parents. Looks like my beliefs are going to impact my teaching a little more than I thought.
Another PST, a Ritual Atheist, described how one of her students asked her if God was going to be mad at her for her bad behavior: I asked her if she thought he (God) would be. She said no, because he loves her no matter what. I agreed with her. She found peace after a day filled with turmoil. I am always going to support that peace within my students no matter what religion. Always.
The nonreligious PSTs seemed to have no trouble supporting the religious beliefs of their students, at least during their student-teaching practicum. The student-teaching practicum, however, is in many ways different from the actual classroom. During student teaching, PSTs are subordinate to the classroom teacher, their university supervisor, and the host school in general. The PSTs were not able to establish their own classroom milieu during student teaching the way they presumably will be able to once they have their own classrooms.
The Role of their Nonreligion in their Future Teaching
In addition to examining the PSTs' experiences with and perspectives on the intersection between their nonreligiosity and their student-teaching experiences, I also asked the 15 PSTs to consider how their beliefs might impact their upcoming teaching careers. All of the PSTs were intending to obtain a full-time teaching position for the following school year, making teaching their careers. I was curious whether the PSTs would project their future actions to be different from what they experienced during their student-teaching practicum. Three themes emerged from this line of questioning: nondisclosure, tolerance, and critical thinking.
The PSTs' participation in this study was contingent upon confidentiality, and all but two of the PSTs expressed their plans to purposefully keep their nonreligion secret when they are teachers. For example, one PST, a Seeker Agnostic, shared: I am nervous about anyone finding out that I am an atheist elementary school teacher because I don't want my students and their parents to think less of me. I hope that the correlation between atheism and being a bad person goes away in my lifetime.
These PSTs were certain that their atheism/agnosticism would be a detriment to them, both in the job hunt and in their teaching careers. One of the Ritual Atheists explained how her lack of belief might negatively impact her: "I want to teach in this area, which is overwhelmingly Christian. Admitting that I don't believe in God could prevent me from getting hired. Admitting it really has no upside." The PSTs commonly described how they want to model tolerance and acceptance. One PST, a Non-theist, remarked: "If a student tells a story about Sunday School, you can tell them it was a nice story and thank them for sharing. Students need to see that you can have polite, supportive interactions when religion is the topic." The PSTs described how they want their students to feel safe and accepted, regardless of their beliefs. One Seeker-Agnostic PST explained, "Most kids believe what they believe because of their parents. I want my classroom to be a place where kids can ask any questions they want, give any opinion they want, without being ridiculed." While the PSTs were positive regarding their tolerance of their students, they expressed frustration with students who did not show tolerance of others. One PST, the Antitheist, stated: I feel myself getting so frustrated with students who are not open to learning about other religions. When I taught them in class, we are clearly not teaching them to choose another religion, and some students are taught by their parents that the only religion that is acceptable is their own, so they simply shut themselves off in the classroom. It was incredibly hard for me to teach them when they refused to listen because they were taught to be ignorant of other's beliefs, which is where so many problems arise.
Overall, the PSTs were quite positive and hopeful about their future goals of promoting tolerance. For example, one Seeker Agnostic PST stated: I think that my open-minded perspective will help make the most of the diversity and interests of my students, which will connect the students, and challenge them to expand their knowledge and viewpoints. I also hope my curiosity for the world around me will spread onto my students.
Likewise, the Activist PST said, "I think that the world would be a happier place if more people were open minded." The nonreligious teachers in this study articulated a clear sense of compassion for their students and felt that their lack of religiousness better equipped them to model and teach tolerance. For example, one of the Nontheists stated: The fact that I don't follow any religion increases my acceptance and value of different people and their many beliefs. I think there is something to be learned from all of the different religions and that through education we can promote acceptance, and not just tolerance. The fact that I do not follow one religion allows me to teach about religion and culture in a more objective way.
Several of the PSTs were adamant that they intend to teach their students to think critically and to seek evidence rather than to accept what they are told. For example, a Seeker Agnostic PST explained, "I use evidence and reason to view the world, which I think is a good thing. Why wouldn't I want my students to do the same?" The PSTs described how they want their students to "examine issues from all sides." Most of the PSTs noted that they wanted to respect what their students' parents taught them at home; however, the Activist PST and the Anti-Theist PST were explicit in their desire to help students "move beyond the dogma they are taught at home." Only these two PSTs described any intent related to subversive pedagogy. The rest of the nonreligious PSTs mentioned critical thinking, directly or indirectly, but stopped short of indicating that they intend to change their students' minds about their religious beliefs. Rather, most of the PSTs made comments along the lines of this one made by a Seeker-Agnostic PST: I really want to foster curiosity in my classroom. Curiosity leads to creative thinking which begins with a sense of wonder and mystery that motivates students to learn new ways to understand and express themselves. This will also cause students to seek diverse connections and build new relationships among ideas.
Most of the PSTs connected their own cognition and learning styles with how they intend to teach. Nearly all of these nonreligious PSTs shared how they did not like having to memorize information only to "regurgitate it back on the test." They preferred to learn through debate and discussion, exploring multiple perspectives on complex topics. Accordingly, the A/A PSTs described how they want to teach students to interpret, analyze, and synthesize multiple sources of information on each topic, particularly in social studies. Even in their plans for teaching math, these PSTs expressed an aversion to rigidity. One PST described how she hated having to "do problems the teacher's way" even though she could get the answers in her head an easier way.
Discussion
With the number of nonreligious Americans on the rise, it is imperative to learn more about the beliefs, intentions, and actions of this minority population, particularly since people's belief systems impact how they act (Hartwick, 2014). We know remarkably little about nonreligious teachers, which is problematic since teachers' beliefs "are likely to influence how they view their professional lives, likely impacting how they see and treat students, view knowledge, and what classroom resources they might use" (p. 4). In other words, teachers' beliefs on the supernatural may impact what and how they teach.
This study did not directly investigate how nonreligious teachers teach. Rather, I sought to learn, through interviews, reflections, and the PSTs' self-categorizations, more about the types of nonreligiousness of 15 PSTs, as well as how those different nonreligious teachers might approach the profession. Unlike the majority of teachers in Hartwick's (2014) research who expressed a calling by God to teach, none of the nonreligious teachers in this study expressed a compulsion or sense of mission to become a teacher. The nonreligious PSTs referred to their love of learning and wanting to share their passion with others, but none of the PSTs connected this directly to their lack of belief in the supernatural. For example, one of the Seeker Agnostics stated, "I am always learning. I love to admit that I don't know things and that I want to learn more. I want to teach that attitude to my students." The A/A PSTs expressed an affinity for critical thinking and debate, and intend to teach their students how to think critically; however, they did not go so far as to describe their desire to become teachers as a "calling." The ways the PSTs plan to approach their craft of teaching presented the most pronounced expression of the PSTs' nonreligiousness. It is also in this area that differences among the types of nonreligious PSTs were apparent. Recall that Silver's (2013) extensive analysis of nonreligious Americans, which led to his creation of the six types of nonbelief, yielded distributions quite different from the 15 PSTs in this study. Certainly, the small sample size in this study is a limitation, but the differences are worth noting. Whereas in Silver's sample more than 60% of respondents were Intellectual A/A or Activist A/A, only one PST fell into either of those categories. It is important to note that although large, Silver's sample was not a representative sample of the nonreligious in the US. The Intellectual and Activist A/A categories, along with Anti-Theists, consist of people who are active in the "movement" and are public about their lack of religion. The PSTs in this study, however, were overwhelmingly inactive and private.
Though 13 of the 15 PSTs intend to remain "closeted" during their teaching careers, suggesting that little has changed since Nash (2003) wrote about the stigmatization of atheist college students, the PSTs' nonreligiousness will certainly play some role in their teaching, if only indirectly. Nonreligious teachers are more likely to present students with open-ended tasks and to use resources other than the textbook (Author, 2014;Hartwick, 2014). Albeit covertly, the nonreligious teachers plan to promote skepticism over certainty, and interpretation over dogma. Whereas White (2010) found Christian teachers more likely to select curricular resources written by overtly Christian authors (like C.S. Lewis), the ideas expressed by my nonreligious PST participants suggest they may be more likely to select resources written by skeptics. As White (2009) noted, "[T]eaching is not a neutral act" (p. 864). Silver's (2013) typology of nonbelief provided a useful framework for learning about different types of nonreligious teachers; however, the PSTs' difficulties with categorizing themselves illustrate the limitations of Silver's socially constructed typology. Silver's typology worked in this study to help me learn more about nonreligious teachers, but it also revealed the complexity and diversity surrounding nonbelief.
Nonetheless, from this limited sample, it appears that nonreligious White, female elementary pre-service teachers tend to exhibit similar characteristics that align them with just a few of Silver's categories. The PSTs in this study were predominantly Seeker Agnostics, Ritual Atheists, and Nontheists, who, compared to the other types (Intellectual Atheists/Agnostics, Activists, and Antitheists) in Silver's national sample, tend to be less strident in their nonbelief, even to the point of participating in and accommodating religious practices that are part of the dominant cultural norms.
Perhaps most importantly, the PST participants in my sample believe that public schools (i.e., government-funded schools) are somewhat unfriendly places for nonreligious teachers. As a marginalized population (Author, 2014;Cragun et al., 2012;Edgell et al., 2006), nonbelievers still face educational and work environments where their worldviews are not fully accepted or accommodated, and the comments of my participants suggest they perceive their future workplaces will not be accepting or accommodating either. Subedi (2006) warned that our Christian-centric schools create biases against students with non-traditional religious identities. Nonreligious students and teachers are often seen as disloyal and unpatriotic outsiders, and the PSTs in my study perceived this. As a result, they indicated that they plan to remain closeted about their nonbelief and nonreligion.
The 15 PSTs in this study represent a rising trend in religiously unaffiliated Millennials (36%), 9% of whom are agnostic or atheist. With the rapid increase in the number of nonreligious elementary teachers entering the profession as Baby Boomers retire, we need to learn more about how these nonreligious teachers construct their self-identities, how they approach teaching, and, most importantly, how they manage their nonbelief and nonreligion in a workplace environment that they perceive to not be accommodating of their worldviews.
"year": 2015,
"sha1": "7c20c399f67b6da9d6ae878e9d6c2a8b7b518e92",
"oa_license": "CCBY",
"oa_url": "http://www.secularismandnonreligion.org/articles/10.5334/snr.bf/galley/62/download/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bbf62ab19cc60b7d2297e6e8097d088c67664547",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
(In)Visible Bleeding: The Menstrual Concealment Imperative
Wood offers a new conceptual framework, "the menstrual concealment imperative", to explain how women's internalization of menstrual discourse contributes to their disembodiment and self-objectification through menstrual "management". This chapter critiques the medical system and menstrual hygiene industry for the (bio)medicalization of menstruation that establishes women as diseased and as unable to know their bodies. Wood suggests that women's vigilance about menstrual concealment is not freely chosen, but a required self-disciplinary practice rooted in menstrual discourse that characterizes menstruation as stigmatized, taboo, and therefore shrouded in secrecy. The concealment imperative is a form of social control and a body project that keeps women disembodied and objectified. As a conceptual tool, it has implications for understanding the various ways that women's bodies are regulated at both individual and social levels.
The conceptualization of menstruation as disease and the association of menstruation with femaleness have established menstruation as a political issue (for example, Bobel 2010;Ussher 2006). While Houppert has described the significance of the culture of concealment surrounding menstruation, this chapter explains menstrual concealment as an imperative that women adopt through their internalization of and adherence to menstrual discourse. "The menstrual concealment imperative" is a conceptual framework to explain how women's internalized perceptions of menstruation as diseased, taboo, and stigmatized contribute to their disembodiment and self-objectification (Roberts 2004). It suggests that women's vigilance about menstrual concealment is a form of self-surveillance and self-objectification that is fostered by the medicalization of women's bodies and neoliberal approaches to women's health. The potential for menstruation to be oppressive is rooted in a complex, multifaceted, and all-encompassing imperative for women that functions as gendered body politics to (re)produce the very conceptualization of women's bodies as othered (de Beauvoir 1952). Offering a critique of the medical system and menstrual hygiene industry, this chapter analyzes menstrual discourse that establishes women as diseased and as unable to know their bodies. Using a neoliberal rhetoric of "choice", the menstrual hygiene industry cleverly posits menstrual concealment as "freedom" and thereby facilitates women's complicity in their own subjugation. I offer "the menstrual concealment imperative" as a theory to explain how women's internalization of the culture of concealment is a form of social control and a body project (Brumberg 1998) that keeps women disembodied and oppressed.
Menstrual Discourse
In this section, I use Foucault's work on discourse to explain the significance of the production of menstrual knowledge as oppressive to women. First, I will discuss Foucault's conceptualization of discourse and biopower as it relates to menstrual discourse. Next, I will discuss the significance of menstrual taboos and menstrual stigma. Finally, I will discuss medicalization in terms of how menstrual discourse constructs women as diseased by virtue of their menses, and therefore how knowledge production by the medical field and the associated female hygiene industry establishes women as perpetually disempowered. Foucault's (1984) concept of biopower explains how social norms and expectations embedded in the micro-levels of everyday life coalesce into powerful discourses that shape what is considered normal on both individual and societal levels. Arguing that biopower is produced through discourse (and individuals' desire to adhere to it), he explains how pressure to conform to cultural norms produces individuals' voluntary self-subjugation. In this way, biopower is a form of social control enacted through individuals' internalization of dominant discourses that result in self-disciplinary practices. Foucault argues that biopower is especially salient in terms of regulating bodies through medicine and technology because discourse structures social and individual attitudes, thoughts, and behaviors as if the knowledge were objective. However, this knowledge is characterized by a specific epistemological stance in order to maintain ideological control. Ussher (2006) argues that biopower constructs the female body as deviant, polluted, and "monstrous" based on reproductive processes like menstruation, regulating women's bodies through discourse that positions them as in need of surveillance and treatment. Menstrual discourse constructs women's bodies as diseased, shameful, and polluted. Foucault's concept of biopower elucidates how women's bodies are a site of self-discipline and how these practices produce 'docile bodies' (for example, Bartky 2014;Bordo 1989;Pylpa 1998;Patterson 2014). Pylpa explains: "… medicine creates the discourse that defines which bodies, activities, and behaviors are normal; at the level of practice, medical procedures are a principal source of the institutional regulation and disciplining of bodies" (30). In this way, menstrual discourse creates biopower at both micro and macro levels. Individuals voluntarily conform to disciplinary strategies of their bodies through their own desire (Foucault 1977), and in doing so biopower is produced and (re)produced. Foucault's work prompts an analysis of menstrual discourse that asks questions about how menstrual knowledge is produced and by whom, what constitutes knowing about menses, and who has authority and power to produce menstrual knowledge. Later in this chapter, I will use feminist analyses of Foucault's work on biopower to argue that women's internalization of menstrual discourse results in their self-surveillance of their menstruation and ultimately explains women's internalized need for menstrual concealment.
Menstrual Taboos, Stigma, and Silence
The prominence and significance of menstrual taboos and stigma in women's lives have been well documented by scholars in various disciplines for decades (for example, Bobel 2006;Brooks-Gunn and Ruble 1982;Buckley and Gottlieb 1988;Golub 1992;Roberts 2004;Stubbs and Costos 2004;Ussher 1989). The stigmatizing power of menstrual taboos is evident in how menstruation and menstrual blood have been variously described: as simultaneously magical and poisonous (Golub 1992), an abomination (Rozin and Fallon 1987), disgusting and aversive (Bramwell 2001), contaminating (Laws 1990), unclean and impure (Cicurel 2000), a threat to femininity (Lee 1994), and a blemish on one's character (Johnston-Robledo and Chrisler 2013). Menstruating women are also perceived as a danger to men and a threat to male power (Delaney et al. 1988;Guterman et al. 2007).
The consequences of menstrual taboos for women's lives are significant and varied, as prohibited behavior for menstruating women is contextualized culturally, geographically, and according to religious and other social practices. Many women report their need to maintain cleanliness during menses, and most women report altering their usual activities during menses (Jurgens and Powers 1991). Indeed, women's motivation to change their behavior during their menses may be a reasonable precaution, as menstruating women are harshly judged and characterized as irritable and unsexy (Forbes et al. 2003). Roberts et al. (2002) found that by virtue of simply having a tampon visible in her bag, a woman is perceived as less likeable and less competent, and that observers made an effort to physically distance themselves from her. Menstrual taboos function to separate, exclude, and even banish menstruating women from public and private spheres, preventing their full participation in public life as well as in their own full subjectivity (for example, Johnston-Robledo and Chrisler 2013;Roberts 2004;Thomas 2007;Thornton 2013;Ussher 2006).
Silence perpetuates menstrual stigma and is a key indicator of the culture of concealment (Delaney et al. 1988;Houppert 1999;Kissling 2006). Menstruation is considered inappropriate public conversation to the extent that girls and women are often too uncomfortable to discuss the topic even with each other, healthcare providers, or family members (for example, Golub 1992;Houppert 1999;Johnston-Robledo and Chrisler 2013). Ussher (2006) describes women's "unspeakable bodies" referring to how the silence surrounding menstrual shame results in women's self-isolation. Similarly, Houppert's (1999) "culture of concealment" explains how sociocultural influences construct menstruation not just as taboo, shameful, and debilitating but also as invisible in US culture. She details the powerful influence that the menstrual product industry has to conceptualize menstruation as an illness, and how the development and advertisement of menstrual products create and reinforce women's insecurities around their periods as a hygiene crisis. In this way, menstrual product companies convince women to conceal their periods, and then provide their own products to enable that concealment, reinforcing cultural attitudes that menstruation is embarrassing and should be kept secret. Menstrual discourse disempowers women by not only constituting menstruation as a negative, taboo, and stigmatized event that women must conceal, but also by enabling others to produce knowledge about women's bodies that is not based on their own situated knowledge and experiences.
The Medicalization of Women's Bodies
In addition to the conceptualization of menstruation as taboo and stigmatized, menstrual discourse is also characterized by the medicalization of menstruation, which further constructs menstruation as a disease process in need of concealment via medical management. Conrad (1992) explains medicalization as a sociocultural process that functions as a form of social control. Women are especially vulnerable to medicalization, and an overwhelming majority of women's natural life processes have been medicalized (Conrad 1992;Ehrenreich and English 2005). As such, feminist researchers and women's health activists critique the over-medicalization of normal, natural, healthy body processes that characterize women's everyday lives (for example, Kaufert and Gilbert 1986;Lippman 2004;Tiefer 1995). These scholars have demonstrated how women's bodies, especially reproductive processes, are medicalized as a form of political and social control to the detriment of women's health and lives (for example, Ussher 2006;Ruzek 1978;Wood et al. 2007).
The medicalization of women's bodies has significant consequences for women's lives. First, women are ideologically constructed as deficient, ill, and diseased to legitimate the need for medical treatment and constant medical surveillance. Secondly, medicalization functions as a form of social control by establishing medical practitioners as the experts on women's bodies based on women's illness as defined by the medical model. Kaufert and Gilbert (1986) argue that when women are not considered capable of knowing their bodies then their subjective experience of themselves shifts to that of a patient. As patients, women are morally obligated to be treated and medical practitioners are ethically required to diagnose and treat their assumed diseased state. In this way, the medicalization of menstruation contributes to menstrual discourse that positions women in a constant state of disease. As Ehrenreich and English (2005) explain, "Not only [are] women seen as sickly -sickness [is] seen as feminine" (22).
While scholarship on medicalization as a theoretical framework has existed for decades, biomedicalization has been more recently described as distinct from medicalization both in terms of historical context and an increased focus on techno-scientific processes (Clarke et al. 2010). Like medicalization, the biomedicalization of menstruation shapes menstrual discourse through the production of knowledge that establishes women as unable to know their own bodies. While medicalization controls bodies through defining disease, biomedicalization encourages the transformation of bodies based on the construct of health, so that biomedicalization is broader, more invasive, and reaches into lifestyle decisions around health, risk, illness, and wellness as a moral imperative. The biomedicalization of health is characterized by a more complex and insidious structure of knowledge production and dissemination, including the corporatization of medicine. No longer is medical discourse produced solely by medical experts, but also by pharmaceutical companies, media outlets, alternative medicine practitioners, patient self-help groups, and research conglomerates that often advertise products, drugs, technologies, and information targeted directly to consumers. In biomedicalization, it is not an illness, disease, or dysfunction that is treated, as in medicalization, but the risk of these (Armstrong 1995). Because "health" is so broadly conceptualized, so poorly defined by so many "health experts," and subject to constantly changing recommendations, individuals no longer need to be sick to be treated; simply the risk of eventual poor health is sufficient for intervention. When health is no longer conceptualized as the absence of disease, the possibilities for diagnosis, treatment, and intervention to address the risk of illness are virtually endless (Clarke et al. 2010).
Despite theoretical distinctions between medicalization and biomedicalization, I use the term (bio)medicalization in this chapter to refer to the simultaneous ways that women's menstrual bodies are transformed, regulated, and controlled through menstrual discourse. Because menstruation is conceptualized from a biomedical perspective as a form of illness, women are encouraged to transform their bodies to prevent potential hazards of menstruation. Technological and pharmaceutical interventions promise menstrual concealment to women as an individual "choice" (for example, menstrual suppression) by transforming women's menstrual bodies into non-bleeding ones. Similarly, menstrual products are marketed to women as hygiene products so that women can manage (conceal) their menses as part of their individual responsibility for their own health. In this way, (bio)medicalization contributes to menstrual discourse by establishing the amorphous healthcare industry as the experts on menstruation while assigning women to the perpetual role of patient. Together, the (bio)medicalization of menstruation and menstrual stigmas and taboos function to create a menstrual discourse which controls women's bodies and lives based on epistemologically flawed biomedical ideologies. Menstrual concealment is constituted in menstrual discourse as an individual woman's "choice" as part of her own pursuit of health. In order to understand how menstrual concealment becomes an imperative for individual women, next I will consider how women internalize menstrual discourse through self-surveillance.
Self-Surveillance and Self-Objectification
In this section, I explain how menstrual discourse and biopower facilitate women's internalization of menstrual self-surveillance in the form of imperative menstrual concealment practices. According to Foucault (1977), individuals come to desire conformity to discourse through biopower and engage in a resultant process of self-surveillance through panoptical power. The panopticon is a model of how to socialize the masses into a state of constant self-policing so that individuals conform to "normal" behaviors and attitudes without the need for external enforcement. Through discourse, these norms become so desirable for individuals to obey that they then voluntarily self-monitor their adherence to them. Knowledge produces discourse that establishes cultural norms that individuals desire to conform to; because the knowledge is presented as objective and "true," individuals organize their behavior around the discourse, thereby enacting their own practice of self-surveillance and self-discipline. This constant self-surveillance and self-regulation is posited as individualism, despite the fact that it is culturally created, and in this way biopower is difficult to see as external to the individual (Bartky 2014;Foucault 1977).
Bartky (2014) builds on Foucault's work, noting how gendered notions of power in self-surveillance practices are especially problematic for women and their bodies. She argues that the panoptical gaze is male, and therefore that women's self-surveillance based on this patriarchal view results in their disembodiment. Bartky discusses her work in the context of women's self-monitoring of their own appearance, explaining that women internalize and embody patriarchal notions of beauty and subsequently adopt body projects (Brumberg 1998) to change their bodies to adhere to cultural beauty norms. Women readily enact these body projects as part of their own self-surveillance and self-disciplinary practices, resulting in the production of their own docile bodies. Women's discipline of their own bodies via an internalized patriarchal panoptical view is insidious because the authority to maintain control of women's bodies is both nowhere and everywhere, and it is neither natural nor completely voluntary. That is, when women practice self-disciplining body projects, they do so without coercion but not by their own free will either.
Just as Bartky uses Foucault's work on biopower, discourse, and self-surveillance to analyze women's self-subjugation around appearance, this chapter applies those concepts to menstruation. Menstrual discourse is characterized by negative views of menstruation that encourage girls and women to self-surveil and manage their bodies to maintain menstrual secrecy (for example, Chrisler 2004; Erchull et al. 2002;Martin 1992;Stubbs and Costos 2004). Whether learned through mothers, educators, product advertisements, or other girls, menstrual discourse encourages girls and women to perceive their bodies as polluted and shameful, and as such, out of their control (Chrisler 2004; Jackson and Falmagne 2013). Ussher (2006) argues that when menstruation is positioned as a form of embodied pathology menstrual discourse encourages women's self-surveillance, self-policing, self-silencing, self-blame, self-sacrifice, and contributes to women's guilt, shame, and blaming of the body.
Other feminist scholars have also established that women's bodies are sites of discipline and that subjectivity is tied to the body, which is constantly in need of management, containment, and discipline (for example, Bartky 2014;Bordo 1990;Lee 1994;Martin 1992;Young 1997). Moreover, because women are primarily valued based on their appearance, self-disciplinary body projects (Brumberg 1998) are strongly associated with femininity. Such body projects reify patriarchal constructs of femininity as women judge themselves as "good" or "bad" women based on how well they conform to standards of femininity that require them to distance their bodies from their selves (Roberts and Waters 2004). In order to be "good," women are necessarily disembodied, objectified, and self-silenced from their menstrual bodies (Roberts and Waters 2004;Ussher 2006). Ussher argues that menstrual discourse posits women as closer to nature due to their bodily subjectivity, and in this way women's reproductive bodies are a marker of "hegemonic constructions of femininity" (2). As such, menstrual discourse constructs women's bodies as pathological and defines women based on their reproductive capacity, associating femininity with women's ability to maintain the secrecy of their polluted bodies. Ussher explains that the implications of defining women as held hostage by their menstrual bodies are significant for how women can inhabit and know their own bodies, as well as for the development of women's subjectivity.
Similarly, Persdotter (2020, this volume) introduces the concept of menstronormativity to expand menstrual discourse into a more all-encompassing process and system that describes the ordering of menstruation at both the sociocultural level and in individuals' lives. Menstronormativity, the aggregate of menstrual norms, stigmas, etiquette, and discourse, describes the regulation of some menstrual subjectivities as "good" and others as "bad." Women's acceptance of menstronormativity fuels self-surveillance and self-disciplinary body projects. The process through which women adopt this internalized male gaze of their bodies and selves can be understood through objectification theory and, ultimately, women's self-objectification.
Objectification theory explains how the sexual objectification of girls and women functions to separate their bodies from their personhood so that female bodies are viewed in terms of how they serve others, often the sexual interests of men (Bartky 1990;Fredrickson and Roberts 1997). When women internalize this objectification, they adopt an outsider's view of themselves, evaluating their bodies and appearance from a sexually objectified gaze; this is the process of self-objectification (Fredrickson and Roberts 1997). Fredrickson and Roberts argue that self-objectification explains a woman's sense of self-detachment from her own body. As such, self-objectification is one way women unwittingly participate in their own oppression.
Self-objectification is a common practice for women, especially around reproductive functions like menstruation, and it is associated with a host of negative health effects, including increased body self-surveillance, body shame, and negative attitudes about body functions like menstruation (Johnston-Robledo et al. 2003;Roberts and Waters 2004;Roberts 2004). Roberts (2004) applies objectification theory to menstruation, arguing that as women internalize US culture's sexual objectification of them, they learn that menstruation must be concealed in order to appear adequately feminine, attractive, and sexually desirable. Women who engage in self-objectifying menstrual practices also report more self-surveillance and associated feelings of shame, self-loathing, and self-disgust. In this way, self-objectification prevents women from inhabiting their bodies in an emotionally and physically authentic way. This contributes to women's alienation from their subjective experiences and is a form of de-selfing, as women replace their own sense of self with an outsider's (male) gaze (Roberts and Waters 2004, 13).
Because menstruation is viewed as the antithesis of a sexually desirable feminine body, women learn that to be sexually desirable, attractive, and feminine, menstruation must be concealed (Grose and Grabe 2014). For instance, Erchull's (2013) research found that women's bodies are portrayed as highly sexualized even in ads for menstrual products, illustrating the need for women to use such products for menstrual concealment in order to ensure their constant sexual availability. Other researchers have also found that women's attitudes toward their menstrual bodies are incompatible with their internalized valuing of their bodies as sexually desirable (Johnston-Robledo et al. 2007). In this way, women's self-objectification contributes to their desire to distance themselves from their bodies via menstrual concealment (Erchull 2013;Grose and Grabe 2014;Roberts 2004;Roberts et al. 2002) or menstrual suppression (Johnston-Robledo et al. 2007).
As problematic as self-objectification and its associated risks are for women, adherence to idealized female body standards may be logistically beneficial to women in a patriarchal culture. For example, Fredrickson and Roberts (1997) propose that self-objectification might appear to women as a strategy to claim power in a patriarchal system in which attractiveness is currency. Just as women benefit economically from being attractive, women who distance themselves from their bodies, especially menstruating bodies that are feared and abhorred, have more opportunities in the public sphere (Roberts and Waters 2004). Menstrual concealment thus serves as a tool to distance oneself from the feminine, and it does benefit women in terms of their acceptance in a patriarchal society. Self-objectification can therefore be considered a survival strategy through which women present their bodies in an idealized form, and in this sense it is fundamentally about self-surveillance (Roberts and Waters 2004).
The Menstrual Concealment Imperative
As women self-objectify through a patriarchal, body-hating view of themselves, menstrual concealment offers women a way to "free" themselves from their monstrous bodies. Women not only feel obligated to render their periods invisible, but when framed as an empowering choice, menstrual concealment falsely offers women a sense of control over their out-of-control bodies. As a theory, the "menstrual concealment imperative" explains how women internalize menstrual discourse and willingly practice self-surveillance and self-disciplining body projects, even though such practices are self-subjugating and disempowering. This section of the chapter will explain menstrual concealment as a required form of self-surveillance through which women become disembodied in their search for "freedom" from and control over their bodies.
The menstrual concealment imperative is about freedom and control for women; "freedom" from bodies that mark them as othered and are a significant source of their oppression. The concealment imperative offers a solution to women's objectified, pathologized, and then self-objectified bodies, yet women become disembodied through these self-disciplinary concealment practices. In this way, the concealment imperative is a panopticon-like form of social control that women willingly participate in, and as they do so women become complicit in a menstrual discourse that requires them to be disembodied and objectified. When women internalize these negative perspectives via self-objectification, menstrual concealment is no longer a cultural norm but a moral imperative to exist in a patriarchal society. Without agency to contextualize their own menstrual experiences, women desire distance from their menstrual bodies, and menstrual concealment offers this disembodiment. The menstrual concealment imperative is a self-perpetuating cycle of self-surveillance, self-discipline, and self-subjugation.
Menstrual concealment is imperative for menstruating women for several reasons. First, menstrual concealment is required for women to be considered competent (Roberts et al. 2002), attractive, and sexually appealing (Erchull 2013). In order to succeed in public life, women must transform their bodies to meet patriarchal expectations for how their bodies appear to others and how their bodies impact others' feelings about them in terms of comfort, sexual attractiveness, and hygiene. Menstrual concealment benefits women socially, politically, and personally because menstruation marks bodies as feminine and therefore as weak. Practically speaking, women are more successful in their lives if they appear unencumbered by their menses. The menstrual concealment imperative explains the practical benefits in women's public and private lives that may result from their concealment practices.
Secondly, women perceive menstrual concealment as imperative because menstrual discourse dictates how women experience their menstruation as polluted, unclean, disgusting, and as an illness to be managed. Menstrual discourse conceptualizes menstruation as pathological and posits the transformation of the diseased body as the "right" way to avoid possible risks associated with menstruation. In (bio)medicalization terms, menstrual concealment is both control over and transformation of the female body into one that is less stigmatized. Surveillance medicine requires the management of the menstrual body through menstrual concealment as a moral obligation for women as patients and health care consumers to avoid ambiguous risks associated with the illness of menstruation. Menstrual concealment is imperative for women to avoid illness and consider themselves "healthy." Moreover, women may feel out of control in their bodies when their bodies are positioned as monstrous, disgusting, and diseased; menstrual products are offered as a way for women to "control" their bodies. As such, menstrual products are a technology used to transform the dysfunction of the menstruating female body into a non-menstruating one (Vostrel 2008). Thus, the menstrual concealment imperative is constructed and (re)produced through menstrual discourse and menstronormativity to allow women to dissociate from their bodies that mark them in oppressive ways. Because stigma surrounding the menstrual body threatens women's full access to the public sphere (Thomas 2007), it is understandable that women willingly become disembodied as a potentially liberating tactic in patriarchal culture.
Finally, menstrual concealment is imperative because in women's private lives it marks them as "good women." Patriarchal standards of femininity are rooted in how women's bodies serve others; women's bodies must be clean, sexually attractive, and not inconvenient or uncomfortable for others. Girls adopt the concealment imperative very early; at menarche, they learn how to manage their menstrual shame by concealing their menstruation in order to prevent others' discomfort (Kissling 1996;Jackson and Falmagne 2013). Through objectification of themselves as monstrous, girls and women adopt self-surveillance and self-disciplinary practices to conceal their menses. Thus, menstrual concealment is imperative for women to consider themselves "good" based on patriarchal standards of femininity that require women's docile bodies; there is little possibility for women to avoid menstrual concealment and still claim an identity as "a good woman," "healthy," "attractive," or even "smart." When women self-objectify and internalize hegemonic requirements for their bodies based on patriarchal standards of femininity, women must necessarily become disembodied or risk self-hatred (Roberts 2004). That is, the inability of women to avoid self-hatred without menstrual concealment illustrates the imperative nature of menstrual concealment.
Women may interpret menstrual management and concealment as a form of empowerment and control over their bodies, especially when menstrual concealment is marketed to women as convenience that is characterized as "freedom." The menstrual product industry has created a market for its own products based on the culture of concealment (Houppert 1999), referring to these products as "feminine hygiene" and "sanitary protection" to reinforce the notion that menstruation is an unsanitary condition that girls and women need to protect themselves and others from (Vostrel 2008). Menstrual product advertising and direct-to-consumer (product) education reinforce (bio)medicalization and reify women's need to conceal their menstruation. Products are marketed to girls and women as convenience and "freedom" from their bodies because of how effective they are at enabling women to conceal their menstruation. For instance, Procter and Gamble advertises "Always My Fit" to women as an ally in "better period protection" through a custom-fit sizing chart, now available on the top of all pad packages. The brand claims that "60% of women wear the wrong size pad and 100% can change that!" (always.com 2017, "Tips and Advice Choosing a Pad"). Using a neoliberal approach to target women's self-loathing of their menstrual bodies, Always My Fit offers pseudo-control, choices, and power to women: "… when many women experience a leak they often blame themselves . . . the truth is that a lot of women do not know that leak free periods are possible [if you find] the right pad coverage." For these reasons, menstrual concealment may feel empowering to women, especially as the commodification of menstruation offers women the ability to purchase freedom from their bodies through menstrual products that claim to be specially designed for them. The pressure for body transformation, like menstrual concealment, as a form of individualism and "choice" is characteristic of (bio)medicalization, panoptical forms of social control, and neoliberalism. As such, it is hard to identify as an imperative. Bartky (2014) argues that the lack of an enforcer in the disciplining of female docile bodies makes women's subordination seem isolated, normal, and more a matter of individual choice than of institutional mandate. In this way, the menstrual concealment imperative is both invisible and self-sustaining.
Yet, because menstrual concealment is imperative for women's acceptance and success in both their public and private lives, this practice of self-discipline is not a true choice. I argue that women "choose" to become disembodied and self-subjugating as a form of false consciousness due in part to the conceptualization of concealment as a cultural norm instead of as an imperative. Reframing menstrual concealment practices as imperative self-disciplining behaviors offers a framework to understand women's "choice" to conceal menstruation as a false one for several reasons. First, the risk for women not to conceal is tremendous, including being judged as incompetent, emotional, unattractive, unclean, and diseased. Women may prudently judge that, given other forms of gender oppression, menstrual concealment benefits them in important logistical ways like obtaining or maintaining employment and/or long-term partnerships or marriage. For example, Bartky (2014) discusses that women risk the refusal of male patronage and related intimacy as well as success in their economic and social livelihood when they avoid forms of bodily self-discipline. Moreover, a woman's sense of herself will likely be compromised by avoiding self-disciplinary practices because they are so critical to social constructions of herself as a woman and individual (Bartky 2014). Second, women are often not aware that menstrual concealment is a self-disciplinary practice as a result of their own self-objectification of their bodies. When menstrual concealment is marketed to women as convenience or empowerment, the imperative nature of concealment is rendered invisible. Third, women cannot make a true choice about their menstruation when they are distanced from their bodies. Without agency and subjectivity, women's ability to make decisions, as is characteristic of true choice, is impossible. Fahs (2014) distinguishes between the 'freedom to' and the 'freedom from' in regard to women's subjectivity and agency, arguing that a feminist understanding of freedom must involve both aspects of freedom. In this way, women's freedom to choose menstrual concealment is dependent on women's freedom from menstrual stigmas that mandate menstrual concealment. Finally, menstrual concealment cannot be a true choice for women when alternatives to it are not presented. For menstrual concealment to be a viable choice, women must be able to choose to claim their menstrual realities just as freely as they opt to conceal menses.
The Future
Feminist menstrual researchers have remarked on "unspeakable womanhood" (Ussher 2017) and a missing discourse around women's reproductive bodies (Roberts 2017). I offer the menstrual concealment imperative as a conceptual tool for menstrual scholars and researchers to refer to the totality of the various interrelated processes and layered structural barriers that contribute to women's oppression via menstrual discourse. The invisibility of the menstrual concealment imperative contributes to how insidious it is in women's lives; when women internalize menstrual discourse, they become disembodied, self-objectify, and willingly engage in their own self-surveillance and self-discipline. One possible implication of the menstrual concealment imperative as a theoretical tool is that, by describing and naming it, the imperative for women to conceal their menstruation becomes visible and less insidious. This visibility lends legitimacy to women's experiences and therefore creates the possibility for resistance to menstrual concealment as imperative for women's freedom and success in the private and public spheres. Resistance to the menstrual concealment imperative must begin with making it visible, as Ussher explains: "Identifying self-policing practices allows women to develop more empowering strategies for reducing or preventing . . . distress, developing an ethic of care for the self, and no longer blaming the body …" (2).
Notably, women's voices and experiences are largely missing from menstrual discourse because of their disembodiment, and therefore women's own voices and positive experiences of menstruation can be seen as a form of resistance. Patterson (2014) argues that resistance to normative menstrual discourse can range from being "period positive" to more radical forms of menstrual activism, as is characteristic of menarchists: "Menarchists argue that women need to take back the power of their bodies by publicly undermining patriarchal attempts at control that lead to women's bodily self-loathing. They call on women to reclaim their bleeding bodies, and the entitlement to bleed without secrecy and shame" (105-6). Bobel (2010) explains how menstrual activists, acting in their individual lives, can create change at the level of menstrual discourse: "The activists subvert the precepts of the dominant narrative of menstruation and strive for an authentic autonomous embodiment. Their aim is to seize agentic menstrual consciousness from the docile, disciplined body and stimulate new ways of knowing and being that neither shame nor silence" (41).
Yet, to resist the menstrual concealment imperative on an individual level, a woman has to resist the internalization of her objectified menstrual body and resulting self-discipline in the form of "menstrual management." Thus, I argue that menstrual management of any kind, even with environmentally conscious do-it-yourself, reusable products, is a defining characteristic of the menstrual concealment imperative because "management" is a form of concealment. As Persdotter (2020, this volume) argues with her concept of menstronormativity, we exist in and simultaneously produce menstrual norms, so that it is hard to operate outside the boundaries of this power. Foucault also struggled with the possibility of how to transgress the power of discourse while inside discourse; one possibility for imagining resistance to the menstrual concealment imperative is via his work on resistance as counter-power (Pickett 1996). Free bleeding, or the refusal to use products to collect menstrual blood, is one possible form of women's resistance to the menstrual concealment imperative. In fact, free bleeding as a movement is a form of collective unity and activism among menstruators against menstrual stigma, shame, and the culture of concealment that fuels the need for menstrual management and menstrual "hygiene" products that are increasingly commodified in capitalist cultures (for example, Bobel 2006; Fahs 2016; Lapekas 2013).
In conclusion, in order to resist the concealment imperative at the level of discourse, we must be able to locate it as just one possibility of relating to our menstrual bodies; in order to contest menstrual concealment as imperative, we must locate the imperative as a false truth that appears as all-encompassing because it serves to keep women simultaneously tied to and alienated from their bodies as part of what it means to be "good." The menstrual concealment imperative is a body project (Brumberg 1998) that keeps women in a psychological state of self-hatred and constantly preoccupied with their physical bodies, a way of keeping women busy and "in their place." After all, the menstrual concealment imperative is rooted in menstrual taboos and stigmas based on men's fear of women's menstruation (Delaney et al. 1988; Guterman et al. 2007) and women's own self-internalized fear of their menstrual bodies. The menstrual concealment imperative has implications for understanding the various ways in which women's bodies are regulated at sociocultural and individual levels. As women's ability to control their own bodies is increasingly under political attack, it is critical to illuminate the ways in which women's disembodiment and willingness to distance themselves from their authentic experiences feed patriarchal control of women's bodies and therefore their lives. If menstrual concealment can be disentangled from menstrual discourse that dictates self-surveillance and self-objectification of women's self-shamed bleeding bodies, the possibility exists for women to navigate their menstrual experiences with embodied subjectivity. Note 1. I acknowledge the inherent risks associated with essentializing 'women' as menstruators, and yet the feminization of women's reproductive bodies as polluted and diseased contributes to menstrual concealment as imperative for female bodies. See Bobel (2010, 11-13) for a discussion of the gendered language around menstruators. | 2020-07-30T02:04:58.955Z | 2020-01-01T00:00:00.000 |
"year": 2020,
"sha1": "b5066d656fef9b453ad8e3ebf09e165bd699013b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-981-15-0614-7_25.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "00f881baad28e1489bf872b97abd4a59431f85cc",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
16384377 | pes2o/s2orc | v3-fos-license | Wnt5a Inhibits the Proliferation and Melanogenesis of Melanocytes
Wnt5a, which is a noncanonical Wnt molecule, has been shown to be involved in a variety of developmental processes and cellular functions. In this study, we used "melan-a" cells as a cell model to investigate the effects of Wnt5a on melanocyte proliferation and melanogenesis, and to elucidate the possible mechanisms involved. We infected melan-a cells with recombinant Wnt5a adenoviruses to express Wnt5a protein and to simulate the Wnt5a processing environment. MTT and BrdU incorporation assays revealed that Wnt5a significantly inhibited the proliferation of melan-a cells. Melanin content and tyrosinase activity assays showed that Wnt5a was an inhibitor of melanin synthesis. Furthermore, RT-PCR and Western blot showed that this suppressive effect depended on noncanonical Wnt/Ror2 pathway activation and was accompanied by inhibition of the canonical Wnt pathway. These results provide novel insight into the role of Wnt5a and its related signaling in melanocyte homeostasis.
Introduction
Melanocytes are pigment-producing cells and synthesize melanin that governs skin and hair color and protects individuals from harmful ultraviolet rays [1][2][3] . Melanocytes produce melanin and transfer it via their dendrites to adjacent keratinocytes. The pigment melanin is well known to protect the skin from harmful UV rays through its optical and chemical filtering properties 1,4,5 . However, over-production and accumulation of melanin due to extreme exposure to UV irradiation or chronic inflammation could lead to various skin disorders, such as melanoma, nevus, freckles and geriatric pigment spots 6 . In mammalian melanocytes, melanin biosynthesis is mainly catalyzed by three melanocyte-specific enzymes: tyrosinase, tyrosinase-related protein-1 (TRP1) and tyrosinase-related protein-2 (TRP2/DCT) 4 . Tyrosinase is the key regulatory enzyme of melanogenesis while TRP1 and TRP2 act as modifiers of melanogenic pathway velocity, and perhaps as regulators of other melanocyte functions 4 .
Wnts are secreted cysteine-rich glycoproteins that participate in cell proliferation, differentiation, and migration [7][8][9][10] . To date, at least 19 Wnt members have been identified in mammals. Traditionally, the different Wnts have been classified as canonical or noncanonical 11 . In the best characterized canonical pathway, Wnt binding to Frizzled and LRP5/6 co-receptors induces β-catenin stabilization and translocation to the nucleus where it regulates the transcription of target genes 12,13 . Inversely, the noncanonical pathway is less well defined and its signals are transduced independently of β-catenin. Noncanonical pathways are diverse and grouped into several categories, such as the planar cell polarity pathway (PCP), Wnt/Ca2+ signaling pathway and Wnt/Ror2 signaling pathway 12,13 .
Previous studies revealed that canonical Wnt signaling, specifically Wnt1 and Wnt3a, has crucial roles in the development of melanocytes [14][15][16] . Our recent work showed that Wnt3a contributes to promote melanocyte melanogenesis through the upregulation of MITF, tyrosinase and TRP1 17,18 . Despite the well-known function of canonical Wnt signaling in melanocytes, the role of noncanonical Wnt signaling in melanocytes remains undetermined. Wnt5a, as a typical noncanonical Wnt member, is essential for cell growth, differentiation and migration. Multiple studies revealed that Wnt5a promotes melanoma tumorigenesis, invasion and metastasis [19][20][21] . Furthermore, Wnt5a inhibits the expression of melanogenic antigens in melanoma 22 . However, the precise function of Wnt5a in normal melanocytes is unclear yet. Therefore, this study was conducted to investigate the effects of Wnt5a on melanocyte proliferation and melanogenesis, and to elucidate the possible mechanisms involved.
Adenovirus amplification and infection
The adenoviruses expressing green fluorescent protein (AdGFP), Wnt5a protein (AdWnt5a, also expressing GFP), Wnt3a protein (AdWnt3a, also expressing GFP) were kindly provided by Dr. T.-C. He. The adenoviruses were propagated in HEK 293 cells as described previously 24 . After being purified by caesium chloride gradient centrifugation, adenoviruses were dialyzed into storage buffer, and their titers were determined. For infection, melan-a cells were plated on 6-well or 24-well plates at a density of 2×10^4 cells/cm^2 in the growth medium for 12 h; the cells were then grown in medium supplemented with AdWnt5a or AdGFP, at final titres of 10^6 PFU/mL.
MTT assay
Melan-a cells were cultured in 96-well plates at an initial density of 5×10^3 cells per well for 12 h, and then treated with Ad-Wnt5a or Ad-GFP. After 48 h, MTT (Sigma, USA) was added to each well and the cells were incubated at 37°C for 4 h. The medium was removed and dimethyl sulfoxide (DMSO) was added to dissolve the formazan crystals. The absorbance was measured at 490 nm with an ELISA reader. The experiments were performed in triplicate.
BrdU incorporation assay
Melan-a cells were treated with Ad-GFP or Ad-Wnt5a for 48 h, and then incubated with 10 μM BrdU (Sigma, USA) for 4 h. The detection of BrdU was performed as described previously 17 . Briefly, after incubation in HCl and Na2B4O7, the cells were stained with mouse anti-BrdU antibody (Zhongshan, China, 1:100), then incubated with goat anti-mouse Cy3-conjugated secondary antibody (Beyotime, China, 1:300), and finally counterstained with DAPI. Cells were visualized using an upright BH2 microscope (Olympus, Japan) and quantified by counting BrdU-positive cells in 6 independent areas. The experiments were repeated three times.
Melanin content assay
Melan-a cells were cultured in 6-well plates at a concentration of 1×10^5 cells/well overnight and then infected with AdWnt5a or AdGFP for 48 h. Then the cells were trypsinized and counted. The same number (1×10^5) of cells were collected and the pellets were dissolved in 1 mL of 1 M NaOH at 80°C for 1 h. Melanin concentrations were measured by absorbance at 405 nm. The experiments were performed at least three times.
Tyrosinase activity assay
Tyrosinase activity assays were performed according to the method previously reported 25 . Melan-a cells in 6-well plates were infected with AdWnt5a or AdGFP for 48 h, then lysed by freezing-thawing cycles in 200 µL 1% Triton X-100/PBS. The lysates were clarified by centrifugation, 80 µL of the supernatant were transferred into 96-well plates and 20 µL of 2 mg/mL L-Dopa (Sigma, USA) were added. After incubation for 2 h at 37°C, absorbance was measured at 490 nm. Total cellular proteins were determined to normalize samples. The measurements were repeated at least three times.
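The normalization step in this assay reduces to simple arithmetic. The sketch below, in Python, illustrates one way DOPA-oxidase readings could be normalized to total protein and expressed relative to the control well; the function name and all numeric values are hypothetical and are not taken from the paper.

```python
# Hypothetical illustration of normalizing tyrosinase (DOPA-oxidase) activity
# to total protein and expressing it as a percentage of the AdGFP control.

def relative_tyrosinase_activity(a490_sample, protein_sample,
                                 a490_control, protein_control):
    """Return activity per mg protein, as a percentage of the control."""
    specific_sample = a490_sample / protein_sample      # OD490 per mg protein
    specific_control = a490_control / protein_control
    return 100.0 * specific_sample / specific_control

# Example with invented readings: AdWnt5a-infected lysate vs. AdGFP control.
print(relative_tyrosinase_activity(0.32, 0.85, 0.55, 0.80))  # ~55% of control
```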
Isolation of Total RNA and RT-PCR
Total RNA was extracted at the indicated time-points using a Trizol Kit (Invitrogen, USA) and reverse-transcribed using a reverse transcription (RT) kit (Toyobo, Japan). Semi-quantitative PCR was performed using primers for Wnt5a, GAPDH, β-catenin, tyrosinase, TRP1, Ror2, JUN, JNK1 and JNK2 (see Table 1 for primer sequences and amplicon size). PCR reactions were performed using a touchdown protocol previously described 26 . Briefly, touchdown PCR was performed with the following program: 1 cycle at 94°C for 2 min, 12 cycles at 92°C for 20 s, 68°C for 30 s, and 70°C for 45 s with a decrease of one degree per cycle, and 22 cycles at 92°C for 20 s, 55°C for 30 s, and 70°C for 45 s. PCR products were separated by gel electrophoresis in 2% agarose gels and visualized by ethidium bromide staining.
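For readers unfamiliar with touchdown PCR, the cycling program can be laid out explicitly. The short Python sketch below expands the published program into per-cycle steps; it assumes, as is usual for touchdown protocols, that the one-degree-per-cycle decrease applies to the annealing step. It is an illustration of the program, not part of the original methods.

```python
# Sketch of the touchdown PCR program described above.
# Assumption: the 1 °C-per-cycle decrease applies to the annealing temperature.

def touchdown_program():
    steps = [("initial denaturation", 94, 120)]          # 94 °C for 2 min
    for cycle in range(12):                              # touchdown phase
        anneal_temp = 68 - cycle                         # 68 °C down to 57 °C
        steps += [("denature", 92, 20),
                  ("anneal", anneal_temp, 30),
                  ("extend", 70, 45)]
    for _ in range(22):                                  # constant-annealing phase
        steps += [("denature", 92, 20),
                  ("anneal", 55, 30),
                  ("extend", 70, 45)]
    return steps  # list of (step name, temperature in °C, duration in seconds)

for step in touchdown_program()[:4]:
    print(step)
```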
Western blot analysis
Whole protein was extracted in RIPA lysis buffer (Beyotime, China), quantified with the BCA Protein Assay Kit (Beyotime, China), denatured by boiling, and subjected to 10% SDS-PAGE. Then the protein was transferred onto a PVDF membrane. After blocking with 5% fat-free milk, the membranes were probed with rabbit anti-Wnt5a antibody (1:200, R&D, USA), goat anti-TRP1 antibody (1:1000, Santa Cruz, USA), goat anti-tyrosinase antibody (1:1000, Santa Cruz, USA) at 4°C overnight. Blots were then incubated with HRP-conjugated secondary antibody for 1 h. Proteins of interest were visualized on X-ray film by means of the ECL western blot detection system.
Subcutaneous implantation and Fontana-Masson melanin staining
Melan-a cells were infected with AdGFP or AdWnt5a for 36 h, and collected for subcutaneous injection (10^6 cells/injection) into the back of athymic nude (nu/nu) mice (4-6 wk). Animals were sacrificed 3 days later. Implanted sites were removed and fixed in 4% paraformaldehyde. Paraffin sections were processed, melanin granules were coloured with Fontana-Masson staining, and cell nuclei were counterstained with hematoxylin.
Statistical analysis
Data are presented as mean ± SD of three independent experiments. Statistical analysis between groups was performed by ANOVA using SPSS 18.0. p < 0.05 was considered to be statistically significant.
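The group comparison described here can be reproduced in outline with any standard statistics package. The snippet below is a minimal Python equivalent of a one-way ANOVA on three treatment groups; the study itself used SPSS 18.0, and the readings shown are placeholders rather than the published measurements.

```python
# Minimal one-way ANOVA sketch (the study used SPSS 18.0; data are placeholders).
from scipy import stats

# Hypothetical triplicate absorbance readings for three treatment groups.
control  = [0.52, 0.55, 0.50]
ad_gfp   = [0.51, 0.53, 0.49]
ad_wnt5a = [0.35, 0.33, 0.37]

f_stat, p_value = stats.f_oneway(control, ad_gfp, ad_wnt5a)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> statistically significant
```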
Wnt5a inhibits the proliferation of melan-a cells
To explore the possible role of Wnt5a in melanocytes, we used the mouse melanocyte line melan-a as an in vitro cell model and infected the cells with AdWnt5a to serve as the production source of Wnt5a protein. The adenovirus infection efficiency was examined by green fluorescence (Fig. 1A) and the expression of Wnt5a was confirmed by RT-PCR and Western blot (Fig. 1B).
To determine the effect of Wnt5a on the proliferation of melan-a cells, we infected melan-a cells with different doses of AdWnt5a or AdGFP as control. After 48 hours, the MTT assay showed that Wnt5a inhibited the proliferation of melan-a cells in a dose-dependent manner compared to GFP (Fig. 1C). Similarly, the BrdU incorporation assay indicated that the ratio of proliferating AdWnt5a-infected cells was lower than that in controls (p < 0.05) (Fig. 1D). The results implied that Wnt5a inhibited the proliferation of melan-a cells.
Wnt5a inhibited the melanogenesis of melan-a cells
Since tyrosinase is the key regulatory enzyme for melanin synthesis, its activity is a marker for melanocyte melanogenesis. As shown in Figure 2 A, Wnt5a inhibited the tyrosinase activity of melan-a cells in a dose-dependent manner compared to GFP (p < 0.05). In conformity with the results of the tyrosinase activity assay, AdWnt5a-infected cells decreased melanin synthesis significantly compared to AdGFP-infected cells (p < 0.05) (Fig. 2 B).
To investigate how Wnt5a inhibits melanin synthesis, we analyzed the expression levels of the melanogenic enzymes, tyrosinase and TRP1. A significant decrease in tyrosinase and TRP1 mRNA was detected in AdWnt5a-infected cells compared with controls, especially at 48 hours (Fig. 2C). Western blot analyses also showed that Wnt5a markedly decreased the protein levels of tyrosinase and TRP1 (Fig. 2D). Subcutaneous implantation experiments verified that Wnt5a inhibited melanin synthesis in vivo (Fig. 2E).
Wnt5a activated the Wnt/Ror2 signaling pathway in melan-a cells
Since Wnt5a may activate the noncanonical pathway via the Wnt/Ror2 signaling pathway 27 , we first determined whether Wnt5a stimulated the mRNA expression levels of Wnt/Ror2 signaling pathway members in melan-a cells by RT-PCR. Our analyses showed that Wnt5a increased the expression levels of Ror2, c-JUN, JNK1 and JNK2 in a time-dependent manner. The mRNA levels reached a peak at 24 h and then recovered by 48 h (Fig. 3). The results suggested that Wnt5a might activate the Wnt/Ror2 signaling pathway in melan-a cells.
Figure 4 (legend): The cells were infected with AdGFP, AdWnt3a, or co-infected with AdWnt3a and AdWnt5a. After 48 h, tyrosinase activity was analyzed by tyrosinase activity assay. (C-D) Effect of Wnt5a on the expression of β-catenin, TRP1 and tyrosinase in AdWnt3a-infected melan-a cells. The cells were infected with AdGFP, AdWnt3a, or co-infected with AdWnt3a and AdWnt5a for 48 h. (C) RT-PCR analyses were performed with primers specific for β-catenin, TRP1, tyrosinase and GAPDH (left) and the relative mRNA expression levels were quantitatively measured (right). (D) Western blot analyses were performed with antibodies specific for β-catenin, TRP1, tyrosinase and GAPDH (left) and the relative protein expression levels were quantitatively measured (right). These data are representative results of three independent experiments. *p < 0.05.
Wnt5a antagonized canonical Wnt signaling pathway in melan-a cells
It has been reported that Wnt5a/Ror2 signaling also antagonized the canonical Wnt signaling pathway 27-28 , so we tested the expression level of β-catenin in AdWnt5a-infected cells. Both RT-PCR and Western blot analyses showed that Wnt5a significantly decreased the expression level of β-catenin in melan-a cells (Fig. 4 A).
To further examine the function of Wnt5a in the canonical Wnt signaling pathway in melan-a cells, we infected the cells with AdGFP, AdWnt3a, or co-infected with AdWnt3a and AdWnt5a. We tested the expression level of β-catenin, as shown in Figure 4C and D. A significant decrease in β-catenin protein levels was detected in co-infected cells compared with AdWnt3a-infected cells, suggesting that Wnt5a could antagonize the canonical Wnt signaling pathway by reversing β-catenin expression driven by Wnt3a. The tyrosinase activity assay revealed that Wnt3a increased the tyrosinase activity and Wnt5a remarkably reversed the tyrosinase activity induced by Wnt3a (Fig. 4B). We then studied the expression of tyrosinase and TRP1 in co-infected cells compared with AdWnt3a-infected cells. RT-PCR and Western blot analyses each showed that Wnt5a down-regulated the expression levels of these pigmentation-related mRNAs and proteins upregulated by Wnt3a (Fig. 4C and D).
These results suggested that Wnt5a antagonized the canonical Wnt signaling pathway in melan-a cells.
Discussion
Wnt signaling plays an important role in essential developmental processes such as proliferation, migration, and differentiation [29][30][31] . As a representative noncanonical molecule, Wnt5a has been studied especially in melanoma 32-34 but its role in normal melanocytes is not clearly understood.
In order to investigate the possible function of Wnt5a in melanocytes, we introduced the "melan-a" cell line which was derived from normal epidermal melanoblasts of C57BL mice 23 . In this study, we infected melan-a cells with AdWnt5a and verified that Wnt5a protein was efficiently expressed. Therefore we used melan-a cells infected with recombinant adenoviruses as an in vitro cell model to explore the effect of Wnt5a in melanocytes.
Wnts control various cellular functions, including proliferation. Previous studies showed that Wnt5a inhibited the proliferation of human dental papilla cells, human endothelial cells, and B cells 8,35,36 . In this study, MTT and BrdU incorporation assays showed that Wnt5a also suppressed the proliferation of melan-a cells.
Then we investigated the influence of Wnt5a on the melanogenesis of melanocytes. The melanin content assay and Tyrosinase activity assay both indicated that Wnt5a inhibited melanogenesis in melan-a cells. RT-PCR and Western blot analyses revealed that Wnt5a down-regulated the expression level of the pigment cell-specific genes, including tyrosinase and TRP1 in melan-a cells. Our results suggested that Wnt5a inhibited melanin synthesis through the down-regulation of pigment cell-specific genes in melanocytes.
The noncanonical Wnt signaling pathway is often referred to as β-catenin-independent and can be divided into several categories, such as PCP, Wnt/Ca2+ and Wnt/Ror2 signaling pathways 27 . To clarify which pathway Wnt5a induced in melanocytes, we detected RhoA, Dvl and Ror2, which are classical molecules in the PCP, Wnt/Ca2+ and Wnt/Ror2 signaling pathways, respectively 27 . In contrast to the increased Ror2 expression in AdWnt5a-infected cells, RhoA and Dvl remained unchanged (data not shown). The receptor tyrosine kinase Ror2 has been shown to act as a receptor or coreceptor for Wnt5a to mediate Wnt5a-induced activation of the Wnt/JNK pathway and inhibition of the β-catenin-dependent canonical Wnt pathway [37][38][39] . So we focused on the Wnt/Ror2 signaling pathway in this project. We detected the expression of JNK1, JNK2 and c-JUN and revealed that Wnt5a transiently increased expression of these 3 mRNAs within 24 h after AdWnt5a infection, with a return to baseline levels by 48 h. Our data are consistent with the finding of Nomachi et al. that Wnt5a induces the activation of JNK in a Ror2-dependent manner 37 .
Our laboratory has recently shown that Wnt3a, a typical canonical Wnt pathway molecule, promoted melanogenesis of melanocytes via the up-regulation of the expression of MITF, tyrosinase and TRP1 17,18 . Since Wnt5a generally functions by antagonizing the canonical Wnt signaling pathway 40 , we tested whether Wnt5a might inhibit melanin synthesis by suppressing the Wnt3a mediated canonical signaling pathway. Since β-catenin is the key downstream mediator of canonical Wnt signaling, we examined β-catenin expression levels when melan-a cells were infected with AdWnt3a or co-infected with AdWnt5a and AdWnt3a. The results were consistent with our previous observation that Wnt3a activated canonical Wnt signaling by up-regulating β-catenin expression 17 and, Wnt5a reversed this response. In support of our findings, Topol L. et al. have reported that Wnt5a antagonized the canonical Wnt pathway by promoting the degradation of β-catenin 40 . To investigate the effect of Wnt5a on melanin synthesis stimulated by Wnt3a signaling, we analyzed the expression of tyrosinase and TRP1 in co-infected cells compared with AdWnt3a-infected cells. The data revealed that Wnt5a inhibited the expression of tyrosinase and TRP1 enhanced by Wnt3a suggesting that Wnt5a could inhibit melanin synthesis by suppressing canonical Wnt signaling in melanocytes.
In summary, we demonstrate that Wnt5a can activate Wnt/Ror2 signaling and suppress canonical Wnt signaling and thereby inhibit the proliferation and melanin synthesis of melan-a cells. | 2014-10-01T00:00:00.000Z | 2013-04-05T00:00:00.000 | {
"year": 2013,
"sha1": "ce543101c764ece74cfbff81e303a777a612b86a",
"oa_license": "CCBYNCND",
"oa_url": "http://www.medsci.org/v10p0699.pdf",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "ce543101c764ece74cfbff81e303a777a612b86a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
53089541 | pes2o/s2orc | v3-fos-license | Contamination of Household Open Wells in an Urban Area of Trivandrum, Kerala State, India: A Spatial Analysis of Health Risk Using Geographic Information System
Objective: To assess the sanitary condition and water quality of household wells and to depict it spatially using Geographic Information System (GIS) in an urban area of Trivandrum, Kerala state, India. Study design: A community-based cross-sectional census-type study. Methods: The study was conducted in an urban area of Trivandrum. All households (n = 449) residing in a 1.05 km2 area were enrolled in the study. A structured questionnaire and a Differential Global Positioning System (DGPS) device were used for data collection. Water samples taken were analyzed in an accredited laboratory. Results: Most of the wells were in an intermediate-high contamination risk state, with more than 77% of wells having a septic tank within a 7.5 m radius. Coliform contamination was prevalent in 73% of wells, and the groundwater was predominantly acidic with a mean pH of 5.4, rendering it unfit for drinking. The well chlorination and cleaning practices were inadequate, which were significantly associated with coliform contamination apart from a closely located septic tank. However, water purification practices like boiling were practiced widely in the area. Conclusion: Despite the presence of wells with high risk of contamination and inadequate chlorination practices, the apparent rarity of water-borne diseases in the area may be attributed to the widespread boiling and water purification practices at the consumption level by the households. GIS technology proves useful in detecting environmental determinants like polluting sources near wells and in planning control activities.
Introduction
Water in its diverse forms constitutes the major component in cellular to the inanimate global level. Only about 3% of the total water available on earth is freshwater, of which 68% is groundwater and 30% surface water. 1 In developing countries, 90% to 95% of all sewage and 70% of all industrial wastes are dumped untreated into surface waters. 2 Approximately 22% of the freshwater found at the Earth's terrestrial surface is stored as groundwater. 1 Groundwater maintains its quality by the natural filtering mechanisms inherent to the soil strata by virtue of gradation in the physical parameters of soil and rock as water seeps downwards. 3 Alarmingly though, with the exponential growth in human activities allied to urbanization and industrialization, the groundwater sources are now facing threats of contamination.
Groundwater contamination, be it physical, chemical, or microbiological, can cause a myriad of health hazards. At present, 1.1 billion people are drinking water that is not safe, especially among the developing countries, contributing to millions of young deaths. 4 An estimated 2.6 billion people lack adequate sanitation globally. 5 Those most susceptible to water-borne illnesses are children, the elderly, pregnant women, and immunocompromised individuals. Water-borne illnesses are one of the five leading causes of death among children under the age of 5 years. Approximately 5000 people die every day from waterborne diarrheal illnesses. 6 It is estimated that around 37.7 million Indians are affected by water-borne diseases annually, 1.5 million children are estimated to die of diarrhea alone, and 73 million working days are lost due to water-borne disease each year. The resulting economic burden is estimated at US$600 million a year. 7 It is well established that microbial contamination imposes immediate disease burden and chemical contamination causes chronic diseases. Acute diarrheal diseases constitute the bulk of the immediate disease burden, whereas slow accumulating chemical toxic conditions like fluorosis constitute chronic diseases. Goal 6 of the Sustainable Development Goals refers to "Ensure availability and sustainable management of water and sanitation for all," which also addresses the quality of available water. 8 The provision of clean drinking water has been given priority in the Constitution of India, with Article 47 conferring the duty of providing clean drinking water and improving public health standards to the State. 9 Broadly speaking, water is defined as unfit for drinking as per Bureau of Indian Standards (BIS), IS-10500-2012, if it is bacteriologically contaminated, or if chemical contamination exceeds maximum permissible limits. 10 Access to safe drinking water depends not only on the quality of water at source but also on contamination throughout its way to the user and practices related to purification and sanitation. The global volume of waste disposed of via latrines is approximately 800 million gallons per year, almost all of which is disposed in the subsurface, making latrines the leading contributor to the total volume of waste discharged directly to groundwater. 11 Assessment of water is therefore very crucial to safeguard public health and the environment.
In India, groundwater resources are widely used for drinking and domestic purposes. Estimates indicate that surface and groundwater availability is around 1869 billion cubic meters (BCM). Of this, 40% is not available for use due to geological and topographical reasons. 12 Groundwater has been extensively exploited in India to meet the demands of its population, with dug wells serving as the most common means of groundwater extraction. In Kerala, the groundwater caters to 80% of the rural and 50% of the urban communities for their drinking and domestic needs, 6 mostly by means of dug wells and rarely via bore wells. 17 Kerala is endemic for water-borne diseases like enteric fever and viral hepatitis apart from acute diarrheal diseases, all of them showing seasonal trends, with aggravation in summer. Cases of cholera have also been reported from within the state. 13
GIS technology is a relatively new public health tool to study health-related events in Kerala, and its vast potential of opportunities remain hidden to the public health experts of the state. The Department of Community Medicine, Government Medical College Trivandrum in association with Inter University Centre for Geospatial Information Science and Technology, University of Kerala has embarked on this venture to assess the contamination status of household wells in an urban area of Trivandrum and to map the same using GIS.
Methods and Materials
A community-based cross-sectional study was conducted in the smallest defined health care unit in the community known as a "sub-center." This study was conducted in the Pangappara Public Health sub-center area, which is the field practice area of the Government Medical College at Trivandrum, India. The study region covers an area of 1.05 km^2 and geographically lies between 76°53′50.36″E to 76°55′1.96″E longitudes and 8°33′45.83″N to 8°32′57.36″N latitudes (Figure 1). An integrated approach has been followed in this study, which involves four steps. The study started with the collection of data by questionnaire survey at each house and the collection of geographical coordinates of each house, well, and nearest septic tank using a Trimble Differential Global Positioning System (DGPS). This was followed by further fieldwork to collect groundwater samples for estimating major ions, pH, electrical conductivity, total dissolved solids, and certain trace elements by inductively coupled plasma mass spectrometry (ICPMS) analysis. The generated hydrogeochemical data and the health data were integrated to create the geodatabase on a GIS platform. The final step involved the analysis of the data using GIS tools and statistical software.
The study area has 540 individual houses and 4 multi-storied buildings which house 274 apartments. The study region covers an area of 1.05 km^2. All individual houses were included in the study, and no sampling technique was adopted.
The data collection period was from February 1 to April 30, 2016, which is the summer season in Kerala, when water-borne diseases are most common. Data collection was conducted jointly by a team of postgraduates from the Department of Community Medicine and researchers from the Inter University Centre for Geospatial Information Science and Technology (IUCGIST), wherein data collection and geospatial mapping were done simultaneously. The study population comprised all the households residing in the area, enrolled by census method. Details were obtained from an adult member of the household present at the time of data collection. Dwellings found closed/uninhabited were visited on three separate days at different times of the day. Those houses still found to be locked were excluded from the study, which ultimately gave a study size of 449 households.
The study tool consisted of a structured questionnaire which had domains relating to the demographic profile of the household members, their water usage patterns, and sanitary survey (taken from the National Rural Drinking Water Quality Monitoring and Surveillance Program, Government of India). 7 Water usage pattern part had variables relating to the type of water sources, daily usage, and household practices concerning well chlorination, well cleaning, and boiling and filtering of well water. The sanitary survey included a visual inspection of the household wells and its surroundings. Apart from details pertaining to the well, that is, its built-up, dimensions, parapet, drainage, floor, lining, cover, and mode of drawing water, the survey also took into consideration the distance from the nearest septic tank and any other nearby contamination sources. The sanitary survey using the validated questionnaire 7 gave a contamination risk score for each well, which categorized the well into low risk, medium risk, high risk, and very high risk based on the contamination risk scores.
The location of each house, its well, and its distance from the nearest septic tank were recorded using a hand-held Differential Global Positioning System (DGPS). Later, the Euclidean distance between each well and the nearest septic tank was calculated using ArcGIS software. These data were later mapped in the GIS platform. The advantage of GIS mapping with the aid of DGPS is that the distance from a well to the nearest septic tank could be mapped very accurately, even if the nearest septic tank to the well belongs to the adjacent household, wherein manual measurements would be difficult.
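The nearest-septic-tank distance computed in ArcGIS is, in essence, a nearest-neighbor search over projected coordinates. The Python sketch below shows the same calculation for illustration only; the coordinates are hypothetical and assumed to be in a projected metric system such as UTM, since Euclidean distance on raw latitude/longitude would not give meters.

```python
# Illustrative nearest-septic-tank distance calculation (ArcGIS was used in the study).
# Assumes coordinates are already projected to a metric system such as UTM (meters).
import numpy as np

wells = np.array([[693512.0, 944310.0],      # hypothetical (easting, northing) of wells
                  [693640.0, 944255.0]])
septic_tanks = np.array([[693520.0, 944318.0],
                         [693700.0, 944200.0],
                         [693600.0, 944300.0]])

for x, y in wells:
    d = np.sqrt((septic_tanks[:, 0] - x) ** 2 + (septic_tanks[:, 1] - y) ** 2)
    print(f"well at ({x:.0f}, {y:.0f}): nearest septic tank {d.min():.1f} m away")
```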
In addition, water samples were taken from selected wells, which were used for drinking, taking into consideration the geology and geomorphology of the area. Samples were not taken from the corporation-supplied water or from the six bore wells present in the area. A total of 50 water samples were taken for physical and chemical quality analyses and 30 samples for bacteriological analysis from the 1.05 km 2 area. Water samples for analysis were directly drawn from the well using the same rope and bucket brought by the data collection team. Water quality parameters pH, total dissolved solids (TDS), electrical conductivity (EC) and temperature were taken on the spot and recorded using a EUTECH portable water quality meter (Cyber-Scan series 6000). Water samples were collected in 1000 mL bottles and transported to the laboratory as soon as possible, within 2 hours of sample collection. The inverse distance weighted (IDW) interpolation method in ArcGIS was used to create continuous raster surfaces of these parameters. Further, heavy metal concentrations in the water samples were analyzed using inductively coupled plasma mass spectrometry (ICPMS) with a Merck multielemental standard and a Thermo Scientific iCAP™ Qc instrument.
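The IDW interpolation used to build the raster surfaces follows a simple weighted-average rule: each unsampled location receives the average of the measured values, weighted by the inverse of distance raised to a power. The sketch below is a plain Python illustration with the commonly used power of 2 (the ArcGIS default); it is not the ArcGIS implementation itself, and the sample points are invented.

```python
# Minimal inverse distance weighted (IDW) interpolation sketch (power = 2 assumed).
import numpy as np

def idw(x, y, sample_xy, sample_values, power=2.0):
    """Interpolate a value at (x, y) from scattered sample points."""
    d = np.hypot(sample_xy[:, 0] - x, sample_xy[:, 1] - y)
    if np.any(d == 0):                      # query point coincides with a sample
        return sample_values[d == 0][0]
    w = 1.0 / d ** power
    return np.sum(w * sample_values) / np.sum(w)

# Hypothetical pH samples at projected coordinates (meters).
pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
ph  = np.array([5.1, 5.6, 5.3, 6.0])
print(round(idw(40.0, 60.0, pts, ph), 2))   # interpolated pH at an unsampled location
```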
Results and Discussion
The study area had a population of 1649 individuals, with 810 men and 839 women. The mean age of the population was 37.72 ± 21.5 years. Among the households, 83% (n = 372) were above poverty line (APL) families. The map of the study area is given as Figure 1. Majority of the households used dug wells as the prime source of drinking water (73%) in the study area, followed by the corporation-supplied tap water. A very few households used bore wells. The location of the dug wells was also mapped using DGPS.
A sanitary assessment of each of the 354 household wells was done with the 11-point validated questionnaire, which gave a contamination risk score for each well. The distance from the nearest septic tank to each well was measured using DGPS, which showed a median of 13.40 ± 12.77 m, with a minimum of 1 m and maximum of 139 m. Here, the septic tank may belong to the adjacent household, not necessarily within the same household boundary. This was then mapped, as shown in Figure 2. The sanitary survey results are depicted in Table 1.
The net contamination score from the survey was used to predict the contamination risk for each well. The largest proportion of wells (39%) in the area had an intermediate risk of contamination, followed by a high risk (31%) of contamination (Table 2 and Figure 3).
The water purification practices of the households were assessed, and the majority (91.8%) of them boiled water before consumption. A few households filtered the water without boiling, and a very few used no purification techniques. Well chlorination frequency and well cleaning frequency were also enquired about, and the results were dismal. The mean chlorination frequency was once in 8.46 ± 11.60 months, whereas that of well cleansing was once in 13.24 ± 14.12 months.
Out of the 30 samples that were collected for bacteriological analysis, 22 were positive for coliforms (both total coliforms and fecal coliforms), as mapped in Figure 4. The chemical analysis of open well water revealed the low pH or acidic nature of the groundwater (median: 5.40) as compared with the BIS standards for drinking water. The groundwater pH pattern of the area was mapped using GIS and is given as Figure 5.
Other chemical parameters were within the normal limits. The presence of trace elements/heavy metals was also tested in 10 samples from the area. The sample size was restricted owing to financial constraints in performing heavy metal chemistry analysis (ICPMS). The results, however, were within normal limits except for aluminum and copper (Table 3).
Inadequate chlorination was found to be significantly associated with coliform contamination in household wells when the independent t-test was applied (P value <.05). Similarly, bivariate analysis using the independent t-test showed a significant association between coliform contamination in wells and closely situated septic tanks (P value <.05). Wells that were inadequately cleaned were also found to be significantly associated with contamination by coliforms (P value <.05; Table 4). A multiple regression model was used to predict coliform contamination in the household wells using the contamination risk score, septic tank proximity, and frequency of chlorination and cleansing; however, the model did not reach statistical significance. In the regression analysis, the coliform count was taken as the dependent variable, and proximity of the septic tank to the well, frequency of chlorination in months, frequency of well cleansing in months, and contamination risk score were taken as independent variables. The derived regression equation could be written as follows
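Since the fitted coefficients are not reproduced here, only the general form of the model can be illustrated. The Python sketch below fits the same kind of ordinary least-squares multiple regression; the column names and the data frame are hypothetical stand-ins for the study's variables, and the coefficients it would print are not those of the original analysis.

```python
# General form of the multiple regression described above (all data hypothetical):
# coliform_count = b0 + b1*septic_tank_distance_m + b2*chlorination_interval_months
#                     + b3*cleansing_interval_months + b4*contamination_risk_score
import pandas as pd
import statsmodels.api as sm

# Hypothetical stand-in data; the study's raw data are not reproduced here.
df = pd.DataFrame({
    "coliform_count":               [120, 15, 300, 40, 0, 210],
    "septic_tank_distance_m":       [3.0, 20.0, 2.5, 12.0, 35.0, 5.0],
    "chlorination_interval_months": [12, 3, 18, 6, 2, 10],
    "cleansing_interval_months":    [24, 6, 30, 12, 6, 18],
    "contamination_risk_score":     [8, 3, 9, 5, 2, 7],
})

X = sm.add_constant(df.drop(columns="coliform_count"))
model = sm.OLS(df["coliform_count"], X).fit()
print(model.params)    # intercept b0 and slopes b1..b4
print(model.pvalues)   # the study reports the overall model was not significant
```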
Discussion
This study was conducted to assess the sanitary conditions and groundwater contamination of an urban area in Trivandrum and to map the findings spatially using GIS. Deviating from the popular notion that most of the urban households in Kerala depend on Kerala Water Authority (KWA)-supplied tap water, 15 this study showed that most of the households in the study area depended on well water for domestic and peridomestic purposes. Kerala has made considerable progress since the 1980s, when half of the households in Kerala had no protected water supply or latrines. 16 In this study, all households had access to some form of water supply, and all of them had latrines. The sanitary condition of the wells, though, was inadequate in about half of the study units, thus putting them at risk of contamination. Although the sanitary condition of the wells as such was not remarkable, almost all of the households practiced some kind of water filtration/purification method.
Figure 5 (caption): Spatial distribution of well water pH of the study area. Green color depicts a pH close to the normal recommended range and orange to red shows highly acidic well water.
Bacteriological contamination in the form of coliform organisms poses a serious public health threat. Coliforms can cause a wide range of water-borne diseases ranging from diarrhea to urinary tract infections. Diarrhea is the leading cause of death in children under 5 years around the globe. A study conducted by the Centre for Water Resources Development and Management (CWRDM), Kozhikode, indicates that 70% of the drinking water wells of Kerala have fecal contamination, which is at par with the results of this study (73.7%). 6,13 The cause of contamination is attributed to close proximity of latrines to wells, unhygienic usage of the wells, and utilizing the adjacent area of wells for waste disposal and other purposes, thus making the water unfit for use and resulting in water-borne disease. The permitted minimum distance from a dug well to the septic tank is 15.24 m according to the United States Department of Housing and Urban Development, which is adopted by most of the developed countries around the globe. 18 In chapter 16 of the Kerala Building Rules, the minimum distance between a well and a septic tank is fixed as 7.5 m. 19 Poor town planning and dilapidated well infrastructure add to the bacteriological contamination. The distance from the nearest septic tank was measured here, which need not be the septic tank of the same household, owing to the thickly populated study area. The distance from the well to the septic tank showed a significant negative correlation with contamination in an earlier study 19 as well as in this study. Also, a quarter of the households had a distance between the well and the septic tank of less than the prescribed standard set for urban areas of Kerala. Separate maps were made, which depict the proximity of wells to the nearest septic tank and the distribution of coliform contamination. The well chlorination and well cleaning practices of the households were also grossly inadequate and showed a significant association with coliform contamination in the study area.
There are many studies about the physicochemical and bacteriological quality of groundwater in Kerala. The pH of water that is supplied by KWA has a mean ± SD of 7.17 ± 0.58 as obtained from a previous state-wide study. 19 A study conducted by the Energy & Wetlands Research Group, Centre for Ecological Sciences, Indian Institute of Science, Bangalore 19 states that the groundwater in some areas of Trivandrum has a low pH as per World Health Organization (WHO) and BIS standards. This study also yielded a similar result, but with an even lower pH range. At the same time, very few samples were within the normal pH range. All other chemical parameters were within normal limits. Districts like Palakkad and Alappuzha had foci of chemical contamination of groundwater resulting in hard water, but the district of Trivandrum was relatively free of chemical or heavy metal contamination 20 except for pH. Heavy metal presence was also searched for, but all metals except Al and Cu found in water samples were within normal limits. A study showing the relationship between aluminum and pH points to the risk of developing chronic diseases like arthritis, osteoporosis, and cardiovascular diseases owing to long-term consumption of acidic water. 21 Aluminum toxicity has been reported to cause encephalopathy, Alzheimer disease, renal toxicity, and osteoporosis, 20 whereas copper in excess can act as an irritant in acute exposure and can lead to renal disorders in the long term. 22
Summary and Conclusions
Dug wells still remain a major source of drinking water in the study area. However, the sanitary conditions of the wells are a matter of concern, with 73% of the wells contaminated with coliform organisms. The reasons for the coliform contamination brought out by the study are the close proximity of septic tanks to the wells and inadequate chlorination and cleansing activities by the households. The pH of groundwater was also found to be low as per BIS and WHO standards.
With all these risk factors prevalent in the study area, waterborne diseases are probably kept at bay by the widespread boiling and filtering practices adopted by the households. Nevertheless, the study area is at high risk of water-borne epidemics, and urgent remedial measures must be taken. Initiatives with the active participation of the community, like a mass cleanliness campaign, a well chlorination campaign, and awareness and health education campaigns, may be conducted, as maintaining a sanitary well is the most cost-effective preventive measure against water-borne diseases. Such campaigns have to be supplemented with regular practice, monitoring, and evaluation of the effectiveness of chlorination. Addition of lime can help in neutralizing the acidic well water. Also, house building rules should be made more stringent to reduce potential contamination of wells. In urban areas, where land is limited, use of KWA-supplied water may be encouraged, polluting sources should be identified, and remedial measures taken accordingly.
Novelty of the study
Although there are a few previous research works on groundwater quality in Kerala, there is no spatial study of household groundwater contamination and health risk conducted on a GIS platform. A Geographic Information System proves to be an efficient tool for analyzing contamination sources near a well, which may otherwise be difficult to assess owing to physical barriers. | 2018-11-11T01:39:44.617Z | 2018-10-23T00:00:00.000 |
"year": 2018,
"sha1": "67cab5d11e5edcafae241e6d069ea1201143c602",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1178630218806892",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "67cab5d11e5edcafae241e6d069ea1201143c602",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
270679893 | pes2o/s2orc | v3-fos-license | Uncharted Territories: Dynamic Hip Screw Migration Into the Pelvis Requiring Laparotomy
Hip fractures are common in patients with poor bone quality and are seen to affect the elderly and frail population. We report a case of implant failure after fixing an unstable intertrochanteric fracture with a dynamic hip screw (DHS). The patient presented with a DHS that had migrated into the pelvis approximately six months after surgery. Plain radiographs showed migration of the DHS through the acetabulum and into the pelvis. Migration of a DHS into the pelvis is an extremely rare complication and has only been reported a few times. A 71-year-old man presented with a fall and confusion. The patient reported having a fall but could not recall the exact events. Past medical history included Alzheimer's dementia, osteoporosis, left total hip replacement, right DHS, peripheral neuropathy, and recurrent falls. He had undergone reduction and fixation of a right intertrochanteric fracture with a DHS implant via a direct lateral approach six months before hospital admission. On examination, he had right-sided hip pain and was unable to perform a straight leg raise. His abdomen was soft and non-tender, with no distension or palpable masses. Neurovascular status was normal, and no signs of infection were detected. On the anteroposterior radiograph, the implant seemed to have migrated through the acetabulum and into the abdomen. A CT of the abdomen and pelvis was performed to identify any visceral injuries (negative) and for surgical planning. The patient underwent a midline laparotomy to remove the implant. Although the exact reason for the implant failure is unknown, the migration of an unbroken hip screw into the abdomen and pelvis requiring laparotomy has not been reported in the literature.
Introduction
Because of the aging population, hip fractures are becoming more prevalent and, within one year of operation, have been shown to carry high mortality rates [1,2]. Poor bone quality and fragility are significant factors for hip fractures, and those patients with osteoporosis are more frequently affected [3]. Failure of dynamic hip screws (DHS) has been previously reported in the literature, and very few cases have been reported of screw migration into the pelvis. DHS retrieval by laparotomy, to the best of our knowledge, has never been reported in the literature.
Case Presentation
A 71-year-old man presented with a fall and confusion. The patient had reported having a fall but could not recall the events of the fall or when it happened. The ambulance crew reported that residential home staff had notified them that the patient had been bedridden for the last two days. He had recently been suffering from urinary retention, and his urine had been dark and foul-smelling. The patient began developing hallucinations and episodes of confusion. The patient had a past medical history of Alzheimer's dementia, osteoporosis, bilateral knee replacements, left total hip replacement, peripheral neuropathy, and recurrent falls. He had undergone reduction and fixation of a right intertrochanteric fracture (Figure 1) classified as a type A1.2 fracture according to the AO classification [4]. The fracture was fixed with a DHS implant via a direct lateral approach six months before hospital admission (Figures 2-4); he used a Zimmer frame to mobilise in respite care. On examination, the findings in his right lower limb were abnormal as he was unable to perform a straight leg raise. His abdomen was soft and non-tender, with no distension or palpable masses. The patient was neurovascularly intact and haemodynamically stable. No evidence of infection on examination was detected.
FIGURE 4: Intraoperative images of fracture fixation with a dynamic hip screw (DHS) (lateral view)
On the anteroposterior radiograph, the implant seemed to have migrated through the acetabulum and into the pelvis/abdomen (Figure 5). A CT of the abdomen and pelvis was performed to identify any visceral injuries (negative) and for surgical planning. A further CT angiogram was conducted and showed no evidence of a pelvic haematoma or active extravasation. The angiogram was also performed to ensure that the metalwork was not near any viscera or neurovascular bundles.
FIGURE 5: AP view X-ray showing migration of the dynamic hip screw (DHS) into the pelvis
After consultation with the general surgical and vascular teams, a midline laparotomy was performed to remove the implant. The screw remained covered by the peritoneum, and no visceral injury was identified. Despite antibiotic care following the operation, the patient developed intra-abdominal sepsis. This was seen as a postoperative complication, possibly because of wound infection, and the patient died in the hospital on day six following admission.
Discussion
Intertrochanteric fractures, which occur between the lesser and greater trochanters of the hip, have historically been categorised by the Evans classification [5]. This classification was introduced in 1949 by Dr. Alfred G. Evans and helps guide treatment decisions and assess fracture stability to predict outcomes. The Evans classification divides intertrochanteric fractures into stable (type 1) and unstable (type 2) fractures. Some key features of type 1 fractures include minimal displacement of the fracture, bone fragments being well aligned and stable, and the medial cortex remaining intact. Usually, these fractures can be treated with methods such as internal fixation. Type 2 fractures tend to show significant displacement, are often comminuted or reverse oblique fractures, and the medial cortex is disrupted, leading to instability [5]. For these fractures, treatment is usually done with intramedullary nailing or other fixation techniques.
Since its establishment, the Evans classification has been expanded to include subtypes that further group fracture patterns and their implications. For example, type 2 fractures can be subdivided based on the degree of comminution and specific fracture patterns. This helps surgeons select the most appropriate treatment option, improving patient outcomes and reducing complication rates. Furthermore, in intertrochanteric fractures, a measurement called the tip apex distance (TAD) is crucial in orthopaedic surgery. It is important in fixations with a DHS or intramedullary nail. TAD is defined as the sum of the distances from the tip of the screw to the apex of the femoral head on both the anteroposterior and lateral radiographs [6]. This measure is important for several reasons: a smaller TAD is associated with a lower risk of screw cut-out, in which the screw migrates out of the bone; a TAD of less than 25 mm significantly reduces the risk of mechanical failure; and TAD offers a standardised way to evaluate surgical fixations across studies and cases, facilitating optimisation of surgical technique and patient care [6].
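To make the calculation concrete, the following minimal Python sketch computes TAD with the commonly used magnification correction based on the known diameter of the lag screw; all numerical values, including the assumed screw diameter, are illustrative and are not measurements from this case.

```python
def tip_apex_distance(x_ap_mm, x_lat_mm,
                      true_screw_diameter_mm=8.0,
                      measured_diameter_ap_mm=8.0,
                      measured_diameter_lat_mm=8.0):
    """Tip-apex distance (TAD) with radiographic magnification correction.

    x_ap_mm / x_lat_mm: distance from the screw tip to the femoral-head apex
    measured on the AP and lateral films. The ratio of the true screw
    diameter to the diameter measured on each film corrects for magnification.
    """
    ap_corrected = x_ap_mm * (true_screw_diameter_mm / measured_diameter_ap_mm)
    lat_corrected = x_lat_mm * (true_screw_diameter_mm / measured_diameter_lat_mm)
    return ap_corrected + lat_corrected

# Example: a TAD above 25 mm is associated with a higher risk of cut-out.
tad = tip_apex_distance(x_ap_mm=14.0, x_lat_mm=13.0,
                        measured_diameter_ap_mm=9.0,
                        measured_diameter_lat_mm=9.5)
print(f"TAD = {tad:.1f} mm -> {'higher' if tad > 25 else 'lower'} cut-out risk")
```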
Our patient had features of a displaced type 1 fracture and was deemed suitable for internal fixation with a DHS. Intraoperative images were obtained, but because of their magnification, it was difficult to measure the true TAD in this case.
Several types of DHS failure have been reported in the literature, including hip screw breakage [7], bending of the hip screw at the screw-barrel interface without breakage [8], and breakage of the plate barrel with bending of the hip screw [9]. Recent studies report DHS failure rates of approximately 6.8% [10]. As noted by Spivak et al. [11], a DHS can fail in two ways. The first is low-stress fatigue failure of the device, related to the design of the screw, including the length of the barrel and the internal threaded region. The second is implant failure by high-stress loading, usually observed in nonunion of the intertrochanteric region. Migration of an unbroken hip screw into the abdomen and pelvis is very rare and has only been reported a few times in the literature. To the best of our knowledge, retrieval of a DHS requiring laparotomy has not been reported.
Conclusions
Since the development of the DHS, migration into the pelvis has been a very rare complication and has only been reported a few times in the literature. Furthermore, retrieval of a DHS using a laparotomy has never been recorded. Recognition of such events and adherence to sound surgical technique, especially when operating on elderly, osteoporotic patients, can help avoid such complications. | 2024-06-23T15:11:28.298Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "52c2904db8ec341761ac87af2ad74d7076419727",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/257178/20240621-29363-1hu5cjj.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13ae499718f150b44aee9a1f2e42988d8dfd3a24",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
4586382 | pes2o/s2orc | v3-fos-license | Kinship and Social Behavior of Lowland Tapirs (Tapirus terrestris) in a Central Amazon Landscape
We tested the hypothesis that tapirs tolerate individuals from adjacent and overlapping home ranges if they are related. We obtained genetic data from fecal samples collected in the Balbina reservoir landscape, central Amazon. Samples were genotyped at 14 microsatellite loci, of which five produced high quality informative genotypes. Based on an analysis of 32 individuals, we inferred a single panmictic population with high levels of heterozygosity. Kinship analysis identified 10 pairs of full siblings or parent-offspring, 10 pairs of half siblings and 25 unrelated pairs. In 10 cases, the related individuals were situated on opposite margins of the reservoir, suggesting that tapirs are capable of crossing the main river, even after damming. The polygamous model was the most likely mating system for Tapirus terrestris. Moran's I index of allele sharing between pairs of individuals geographically close (<3 km) was similar to that observed between individual pairs at larger distances (>3 km). Confirming this result, the related individuals were not geographically closer than unrelated ones (W = 188.5; p = 0.339). Thus, we found no evidence of a preference for being close to relatives and observed a tendency for dispersal. The small importance of relatedness in determining spatial distribution of individuals is unusual in mammals, but not unheard of. Finally, non-invasive sampling allowed efficient access to the genetic data, despite the warm and humid climate of the Amazon, which accelerates DNA degradation.
Introduction
The ability of individuals to change their behavior based on the recognition of kin is an important characteristic in the evolution of mammalian social systems [1,2]. For instance, individuals can form philopatric social groups based on kinship or disperse from the natal home range. The way individuals behave with respect to related individuals will affect how genetic diversity is distributed in space [3]. Building an understanding of how different species behave towards kin is crucial to our investigation of the evolutionary causes of mammalian social behavior [4].
A common social outcome mediated by kin recognition in mammals is the formation of philopatric social groups, in which a number of closely related individuals remain together at or near their natal site and display cooperative behavior [5]. Surprisingly, and so far little explored, philopatric social groups are found not only in gregarious species but also in solitary ones [6][7][8]. In solitary species, while there is a significant overlap between home ranges, individuals perform their daily activities alone [6]. This helps explain why such behavior has attracted little attention, as direct interactions occur infrequently, thus making it difficult to carry out observational studies. Observational studies are often further complicated by the structure of the environment and the species' activity pattern, which can further decrease the probability of observing interactions in situ.
To overcome the issues with direct observation, indirect approaches based on home range overlap have been used as a measure of sociality in solitary species [9]. Nevertheless, this still requires capturing, radio-collaring, and monitoring several individuals over a suitable time scale. However, large mammals are not easy to capture. An alternative approach is provided by analysis of genetic data obtained from non-invasively collected samples. Such samples can be used to identify individuals and infer the degree of relatedness among them [10]. By examining the spatial distribution of relatedness we can obtain valuable insights into animal behavior, which do not require capture and manipulation of animals [11][12][13].
Here, we present genetic data on the lowland tapir (Tapirus terrestris), a large solitary mammal, in order to test the hypothesis that lowland tapirs exhibit social behavior based on relatedness. The little we know about social behavior in this species comes from observation of overlap in home ranges, radio-collar data, and anecdotal accounts. Substantial home range overlap between individual tapirs has been found in a number of studies encompassing a range of different biomes [14][15][16]. Medici [16] did not find significant differences in percentage home range overlap across the three possible gender pairs: 43.2% for male-male, 33.4% for female-female and 34.9% for male-female range overlap. The extent of home range overlap suggests that T. terrestris may display some sort of social behavior (as defined in Waser & Jones [6]).
In addition to home range overlap, indications that tapirs display territorial behavior are provided by movement data and exclusion behavior. Tobler [15] found that individuals regularly walked along the borders of their ranges, possibly monitoring a territory. With respect to exclusion behavior, resident T. bairdii individuals were observed attacking newly translocated individuals, suggesting territorial defense [17]. Tapirs may also use latrines as a way of marking territory boundaries, a common behavior among mammals [18,19] (but see Ralls [20] and Rostain et al. [21] for alternative explanations for latrine use). The evidence of territorial behavior and home range overlap suggests that tapirs can recognize different individuals, reinforcing the possibility that this species exhibits social behavior.
Mating systems are also often intimately associated with social behavior and may influence the degree of territoriality in a species [6]. Currently, we lack data on the tapir's mating system. Observations reported by C. R. Foerster in studies of T. bairdii indicate that tapirs are likely facultatively polygynous [16]. Overlap in territory among related females is expected under polygyny [6]. Hence, polygyny can lead to increased spatial autocorrelation in genes at small spatial scales relative to broader spatial scales [22].
The apparent capacity to change behavior based on individual recognition led us to suspect that kin recognition may influence patterns of interactions in T. terrestris. We thus hypothesized that tapirs tolerate individuals from adjacent and overlapping home ranges if they are related. Based on this hypothesis, we expect to find pairs of related individuals geographically closer than pairs of unrelated individuals. This hypothesis was proposed by Medici [16], based on both personal communications with C. R. Foerster and the study of Tobler [15], in which a male and female, likely sibs, were observed sharing their parents' home range. According to C. R. Foerster [16], tapirs form family units, in which there is extensive home range overlap between related individuals while non-related individuals are excluded. Our study is the first, to our knowledge, to test this hypothesis using genetic information.
Thus, the objectives of the present study were: (1) to analyze the spatial distribution of related individuals of T. terrestris; and (2) determine the species mating system.
Study area
The Balbina hydroelectric dam was flooded in 1987 and is located approximately 150 km north of the city of Manaus (Amazonas state, Brazil). Due to the flat topography, the reservoir extends over 2360 km², creating over 3500 islands [23]. To offset the environmental impact caused by the dam, the Brazilian government created the Uatumã Biological Reserve in 1990 (0°50′ to 1°55′ S, 58°50′ to 60°10′ W). The Uatumã reserve is predominantly composed of continuous forest, and its buffer zone includes the lake and island formations. The area is dominated by dense tropical rainforest with an average tree height of 30 meters [23].
Preliminary surveys conducted in the islands of the Uatumã Biological Reserve suggested a high density of tapirs (M. Benchimol personal communication). This was in stark contrast to our previous experience in continuous forest in the Jaú National Park (AGS), where tapir densities were low and dung samples were rare. The observation of greater densities of tapirs in seemingly disturbed habitat is not uncommon [24]. Also, the distribution across islands facilitates fieldwork, as a larger area can be covered by boat than what would be feasible on foot in the jungle. Furthermore, the footprints in the margin of the islands were important indicators of the presence of tapirs and, therefore, provided ample opportunities to collect dung.
The ability of tapirs to swim, the relatively small distance between adjacent islands (see Figure 1), the high tapir density and the logistical efficiency suggested that it would be more feasible to study tapir behavior on the islands than in areas of contiguous forest. Therefore we selected groups of islands (islands in close proximity to one another) to be sampled intensely in order to capture all tapirs, and ascertain that we have described all possible local relatedness connections. To cover a greater portion of the reservoir and have a representative sampling, we surveyed several groups of adjacent islands in different areas of the reservoir ( Figure 1). We used Landsat TM5 satellite images from 2008 analyzed with ArcGIS 9.3 [25] to assist in the selection of target islands. A total of 48 islands were visited over a period of 55 days predominantly in the dry seasons of 2009 and 2010. Sampling was carried out under permit no. 21320-1 issued by the Instituto Chico Mendes de Conservação da Biodiversidade (ICMBio/MMA).
Sample collection
A small aluminum boat with an outboard engine was used to circle the islands in search for recent signs of tapir activity and feces in the water. The number and age of footprints on the margin were determinants in the choice of landing sites for each island. We found a positive relationship between footprint density and the possibility of encountering latrines. Each landing lasted one hour, with three researchers actively looking for latrines or feces.
We only collected feces with a short exposure time (1 to 5 days). Disposable scalpels were used to scrape off approximately 1 ml of the fecal pellet surface. The sample was then placed in a Falcon tube (15 ml) containing 5 ml of Longmire buffer [26] or 10 ml of absolute ethanol. Sample and preservation buffer were thoroughly mixed by inversion to ensure a uniform mixture. The samples were kept at ambient temperature throughout the fieldwork (a maximum of 15 days) and stored at −80 °C in the laboratory.
DNA was isolated using the QIAmp DNA Stool Mini Kit (QIAGEN), following the manufacturer's protocol with the following modifications: (1) the initial amount of the sample was increased to 500 ml and (2) the final elution volume was decreased to 40 ml, as recommended by other authors [27,28].
We used microsatellite markers previously developed for T. terrestris. Fourteen microsatellite loci were tested on blood samples from two captive tapirs held at the Mantenedor da Fauna Cariuá facility (Cadastro Técnico Federal-National Registry Number: 671958). Blood was drawn during scheduled veterinary care procedures by a registered vet (Laerzio Chiesorin Neto, CRMV 0284/AM) following standard procedures approved by the Brazilian Regional Veterinary Council, and the IUCN/SSC Tapir Specialist Group's Veterinary Committee (http://www. tapirs.org). The Animal Ethics Committee at INPA does not require prior approval to conduct sampling if it is deemed 'prophylactic or for other veterinary care'. All care was taken to ensure that no animals suffered during the development of this study.
Non-invasive samples are generally characterized by high rates of genotyping and amplification errors [34][35][36]. To ensure we only used high quality genotypes in analyses we used the multiple-tube approach [37]. In the multiple-tube approach, the genotype at a locus is determined by consistent observation of alleles across multiple PCRs. The exact number of PCRs largely depends on the amount and quality of DNA that can be obtained from the fecal samples, and the resources available to repeat PCR reactions. Taberlet et al. [37] suggest that an initial three positive PCRs be performed, and if ambiguity persists, another four PCRs should be carried out.
We modified the Taberlet et al. [37] approach in order to fit with our budget, sample, and laboratory constraints. We included in the final dataset only genotypes that were observed at least three times in a maximum of seven PCRs per marker per sample. Three positive PCRs with consistent genotypes was the minimum required by Taberlet et al. [37] to achieve 99% confidence in the observed genotype when the genotype is heterozygous. We also propose that the mislabeling of heterozygous individuals as homozygous was not a significant error in our dataset because we: (1) did not detect departures from Hardy-Weinberg proportions and null alleles; (2) re-captured genotypes in close geographic proximity; and (3) observed and expected heterozygosities did not differ significantly from other studies (see Results section).
We found that DNA degraded rapidly after isolation, thus, to obtain three consistent amplifications, an average of three extractions were needed per sample in order to obtain sufficient DNA for all the PCRs. Extractions were not pooled, but instead were performed as needed following PCR amplification failures. Genotypes from samples that did not amplify in five consecutive PCRs per primer were discarded.
The DNA extraction, PCR preparation and PCR handling prior to genotyping were all performed in different laboratories in order to avoid contamination. Sample preparation for genotyping was the only step carried out in a laboratory in which samples from other animals were present. PCR mixtures were prepared in a PCR hood sterilized with ultraviolet light. Positive and negative controls were included as part of each PCR batch and negative controls were included in the DNA extraction step. If the negative control was positive in the PCR, the control was genotyped and, if successful, the samples with similar alleles to the negative control were discarded.
Data analysis
The genotypes were analyzed for the presence of null alleles, allelic dropout and stuttering using the program MICRO-CHECKER v2.2 [38]. To measure the statistical power of the primer set for individual identification, we calculated the probability of identity (P(ID)), which is the probability that two non-related individuals have the same genotype in a population [39]. The P(ID)unbiased and the P(ID)sib were calculated using GIMLET v1.3.3 [40], correcting for the number of sampled individuals and the possibility of sampling related individuals [41]. ARLEQUIN v3.5 [42] was used to estimate heterozygosity and to test for deviations from Hardy-Weinberg proportions and linkage equilibrium. Bonferroni correction [43] was applied to adjust statistical significance across multiple tests.
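For illustration, the per-locus probabilities of identity can be computed from allele frequencies as in the sketch below. This is the naive multilocus estimator; GIMLET additionally applies a sample-size correction to obtain P(ID)unbiased. The allele frequencies shown are hypothetical.

```python
def p_id_locus(freqs):
    """Naive probability of identity between unrelated individuals at one locus."""
    s2 = sum(p * p for p in freqs)
    s4 = sum(p ** 4 for p in freqs)
    return 2 * s2 * s2 - s4          # = sum(pi^4) + sum_{i<j} (2 pi pj)^2

def p_id_sib_locus(freqs):
    """Probability of identity between full sibs at one locus."""
    s2 = sum(p * p for p in freqs)
    s4 = sum(p ** 4 for p in freqs)
    return 0.25 + 0.5 * s2 + 0.5 * s2 * s2 - 0.25 * s4

def multilocus(per_locus_values):
    """Multiply per-locus values across independent loci."""
    out = 1.0
    for v in per_locus_values:
        out *= v
    return out

# Example with two hypothetical loci (allele frequencies sum to 1 per locus):
loci = [[0.4, 0.3, 0.2, 0.1], [0.5, 0.25, 0.25]]
print(multilocus(p_id_locus(f) for f in loci),
      multilocus(p_id_sib_locus(f) for f in loci))
```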
To test our hypothesis it was necessary to first establish the number of distinct genetic units sampled in the Balbina reservoir. The number of units was estimated using STRUCTURE v2.2 [44]. We allowed for the possibility of admixture and considered the allelic frequencies correlated between genetic units. One to four populations (K) were tested and, a priori, we considered all K to be equally likely. We did not use location information to inform the prior (i.e., an uninformative prior). For each K, we ran 10 chains, each with 10⁶ iterations, with the first 10⁵ iterations discarded as burn-in. The most likely K was inferred by maximizing the log-likelihood of the data given K. Convergence of the MCMC was assessed by visual inspection of chains within STRUCTURE, and by comparing results across multiple runs of STRUCTURE. We were satisfied that convergence was achieved when we did not observe any trends in the chains and results across chains were largely comparable. It is hard to assess with complete confidence that convergence has been attained, but it is usually easy to determine that convergence has not been reached [45]. In our approach, we reduce the uncertainty about the effect of the starting values on the final outcome, and we minimize the risk that inferences are drawn from MCMC runs that have not yet reached stationarity [46].
We used ARLEQUIN to estimate gene diversity, and to estimate FST and FIS values [47] based on an analysis of molecular variance (AMOVA) between individuals on the eastern (n = 18) and western (n = 12) banks of the reservoir. We chose this grouping because we believe that the lake is the greatest potential barrier to dispersal in the reservoir landscape. Individuals that could not be clearly assigned to either margin were excluded from this analysis (n = 2).
We also estimated the effective population size (Ne) using the program MIGRATE-N v2.1 [48]. We used Bayesian inference and maximum likelihood to estimate the parameter θ, which was subsequently converted into a coalescent effective population size using the formula θ = 4Neμ. Since there is no estimate of the microsatellite mutation rate (μ) for any tapir species, we considered a range of mutation rates from 1×10⁻⁴ to 5×10⁻⁴. These mutation rates encompass estimates used in mammalian studies [49][50][51].
To infer θ using maximum likelihood, we ran 10 short chains, sampled each chain 5×10⁴ times and recorded 500 genealogies. We then ran three long chains, sampled each chain 1×10⁶ times, and discarded the first 1×10⁵ samples as burn-in. In the Bayesian inference analysis, we ran one long chain, which was sampled 5×10⁶ times, recording every 100th genealogy. Searches were replicated 10 times. Search of genealogy space was improved via adaptive swapping among chains.
Estimates of relatedness between pairs of individuals can be highly variable, and different relatedness estimators will have distinct behaviors for any given dataset and particular relatedness category [52,53]. In order to investigate the properties of different estimators given the observed allele frequencies in our dataset, we used COANCESTRY v1.0 [54] to simulate 100 pairs of individuals in each of the four major relatedness categories (PO: parent-offspring; FS: full-sibs; HS: half-sibs; and UN: unrelated). Because of the difficulty in separating PO and FS pairs, we grouped these into a single first-order (FO) relationship category. We then used COLONY v2.0 [55], KINGROUP v2.08 [56] and IDENTIX v1.1 [57] to classify the simulated pairs into relatedness categories. We used the results from our simulated pairs as a training set in order to set expectations about classifying pairs of samples in our dataset.
In COLONY, due to the lack of data on the sex and age of the individuals, the same individuals were set as possible candidates for siblings, bulls and cows. We used a prior probability of 0.5 that at least one true cow or bull was present in the dataset and accepted only relationships with a greater than 50% probability of belonging to a relatedness class. COLONY uses information about the mating system to classify pairs into relatedness classes. We estimated the likelihood of the data given three different breeding systems: (1) monogamy; (2) polygyny or polyandry; and (3) polygamy. We used Bayes factors [58] to identify which mating model had the highest posterior support given the available data. Bayes factor values less than −2.0 (log10 scale) were considered an indication of a significantly better fit of the more complex model to the observed data [58].
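As a rough illustration of this model comparison, the sketch below converts differences in reported log-likelihoods into approximate log10 Bayes factors; treating the maximum log-likelihoods as directly comparable marginal likelihoods is a simplification, and the numerical values are placeholders, not those of Table 3.

```python
import math

def log10_bayes_factor(logL_model, logL_reference, natural_log=True):
    """Approximate log10 Bayes factor of `model` relative to `reference`."""
    diff = logL_model - logL_reference
    return diff / math.log(10) if natural_log else diff

# Hypothetical log-likelihoods for the three breeding-system models.
logL = {"monogamy": -1250.0, "polygyny_or_polyandry": -1235.0, "polygamy": -1180.0}
best = max(logL, key=logL.get)
for model, ll in logL.items():
    print(model, round(log10_bayes_factor(ll, logL[best]), 1))
# Models falling below about -2 (log10) relative to the best-supported model
# are taken as significantly worse fits.
```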
The relatedness indices (r) of Lynch and Ritland (rLR99 [59]) and Queller and Goodnight (rQG89 [60]) were estimated with KINGROUP and IDENTIX. We used IDENTIX to estimate the 95% confidence interval for each pairwise r by bootstrapping. We used KINGROUP to test relationship hypotheses against more than one null hypothesis using likelihood ratio tests. In other words, we asked what the likelihood odds ratio is of a pair being PO given that FS, HS and UN are the null hypotheses.
The results of the analyses for the simulated pairs were checked in R [61]. We calculated the proportion of unrelated pairs classified as related (type I error), the proportion of related pairs classified as unrelated (type II error), the proportion of first-order pairs (PO and FS) misclassified as something else (misFO), and the proportion of UN and HS pairs misclassified as first-order relatives (misHS/UN). To assist the classification of some relationships in the tapir dataset, for each relatedness category we calculated the mean number of loci for which at least one allele was shared between a pair, and the mean number of alleles shared between individuals in a pair (see Table S1 for more details).
The classification of the simulated pairs suggested that COLONY performed poorly with our dataset (see Results section). Thus, we combined the results of KINGROUP, IDENTIX, and allele sharing patterns in order to produce a final classification for each pair in our dataset. Based on the simulation results, we took the following conservative approach to classify individual pairs into relatedness categories: (1) we classified pairs according to the observed confidence interval surrounding each pair's rLR99 and rQG89; (2) we accepted the likelihood ratio test with the lowest p-value among the following hypothesis comparisons: parent-offspring vs. full-sibs and unrelated; full-sibs vs. half-sibs and unrelated; half-sibs vs. cousins and unrelated; cousins vs. unrelated; and unrelated vs. parent-offspring, full-sibs, half-sibs and cousins.
(3) We classified pairs as FO if they shared ≥7 alleles at ≥0.8 of the loci.
The results from our simulations showed a large overlap between the confidence intervals of half-sib (HS) pairs and first-order relatives (FO; parent-offspring or full-sibs). Thus, we created an HSFO category, which groups individuals that are likely related but whose degree of relatedness is uncertain. As there were more FO relatives in the interval 0.25 ≤ r < 0.5 than in the interval 0.125 ≤ r < 0.25, we decided to create two subgroups of HSFO. This allowed a finer gradient for the classification of relatedness than a single HSFO category spanning 0.125 ≤ r < 0.5 would allow.
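The combined decision rules can be sketched roughly as follows. The exact confidence-interval cut-offs in step (1) are not fully recoverable from the text, so the thresholds used here are illustrative assumptions, and the likelihood-ratio step (rule 2) is omitted.

```python
def classify_pair(ci_low, ci_high, n_shared_alleles, prop_loci_sharing):
    """Toy combination of the classification rules described above.

    ci_low/ci_high: bootstrap confidence interval around the pairwise r;
    n_shared_alleles: total number of alleles shared across loci;
    prop_loci_sharing: proportion of loci sharing at least one allele.
    """
    if n_shared_alleles >= 7 and prop_loci_sharing >= 0.8:
        return "FO"                       # first-order relatives (rule 3)
    if ci_high <= 0:
        return "UN"                       # unrelated
    if ci_low >= 0.25:
        return "HSFO (0.25 <= r < 0.5)"   # related, degree uncertain
    if ci_low >= 0.125:
        return "HSFO (0.125 <= r < 0.25)"
    return "inconclusive"                 # CI overlaps the 0-0.125 zone

print(classify_pair(0.30, 0.55, 8, 1.0))    # -> FO
print(classify_pair(-0.20, -0.02, 2, 0.4))  # -> UN
```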
The relationships in the tapir dataset that could be assigned to a class were used to test for associations with geographic distance. Pairwise geographic distances were estimated as the Euclidean distance between individuals, based on the geographic coordinates recorded for each sample (collected with a Garmin GPSMAP 60CSx). We tested whether the geographic distances between related individuals (classified as first-order or half-sibs) were smaller than the distances between unrelated ones using a Mann-Whitney U test. We also performed a Mantel test for an association between observed r values and geographic distances. Both tests were carried out in R [61]. For the Mantel test, we used the ncf package [62].
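A minimal sketch of the distance comparison is given below, using SciPy's Mann-Whitney U test on hypothetical pairwise distances; the actual analysis was run in R on GPS-derived distances, and the numbers here are placeholders.

```python
from scipy.stats import mannwhitneyu

# Hypothetical pairwise geographic distances (km) between sampled individuals.
related_km = [0.4, 1.2, 2.8, 5.5, 9.0, 12.5, 14.2, 21.7, 26.3, 28.9]
unrelated_km = [0.8, 2.1, 3.9, 6.4, 7.7, 11.3, 15.0, 19.8, 24.6, 27.1]

# One-sided test of the prediction that related pairs are geographically
# closer than unrelated pairs.
stat, p = mannwhitneyu(related_km, unrelated_km, alternative="less")
print(f"W = {stat:.1f}, p = {p:.3f}")
```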
Finally, Moran's I, an index of spatial autocorrelation [63], was estimated based on allele sharing at two scales: individual and landscape. Based on the mean home range described for T. terrestris [14][15][16], we assumed that samples separated by less than three km were deposited by individuals that likely have overlapping home ranges; we called this the individual scale. The landscape scale consisted of pairwise comparisons between samples separated by more than three km. We estimated 95% confidence intervals around the estimates of Moran's I by bootstrapping individuals at both scales; this analysis was performed in SPAGeDi v1.3 [64].
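The logic of the two-scale comparison can be illustrated with the simplified sketch below, which contrasts mean pairwise allele sharing within and beyond the 3 km cutoff. This is a distance-class summary rather than the full Moran's I with bootstrapped confidence intervals computed in SPAGeDi, and all genotypes and coordinates are hypothetical.

```python
import itertools

def allele_sharing(g1, g2):
    """Proportion of loci at which two multilocus genotypes share >= 1 allele."""
    shared = sum(1 for (a, b), (c, d) in zip(g1, g2) if {a, b} & {c, d})
    return shared / len(g1)

def scale_means(genotypes, coords, cutoff_km=3.0):
    """Mean pairwise allele sharing within ('individual') and beyond
    ('landscape') the home-range-scale distance cutoff."""
    local, landscape = [], []
    for i, j in itertools.combinations(range(len(genotypes)), 2):
        d = ((coords[i][0] - coords[j][0]) ** 2 +
             (coords[i][1] - coords[j][1]) ** 2) ** 0.5
        value = allele_sharing(genotypes[i], genotypes[j])
        (local if d < cutoff_km else landscape).append(value)
    return sum(local) / len(local), sum(landscape) / len(landscape)

# Hypothetical data: 4 individuals x 5 loci, coordinates in km.
genos = [[(142, 146), (200, 204), (98, 98), (150, 154), (77, 81)],
         [(142, 142), (200, 208), (98, 102), (150, 150), (77, 85)],
         [(144, 148), (204, 208), (100, 102), (152, 156), (79, 85)],
         [(146, 148), (202, 206), (96, 100), (154, 156), (81, 83)]]
coords = [(0.0, 0.0), (1.2, 0.8), (10.5, 4.0), (22.0, 15.5)]
print(scale_means(genos, coords))
```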
Results
Eleven of the 14 microsatellite markers amplified in the blood samples. One of these markers was monomorphic (Tter18) and five were sensitive to the low quality of DNA from fecal samples (TtGT070, Tte01, Tter13, Tter14, Tter9), resulting in either nonamplification or difficult to interpret electropherograms. The remaining five loci were used in kinship and population analyses, and formed our genotyping panel. No null alleles, allelic dropout, genotyping errors, linkage disequilibrium or deviations from Hardy-Weinberg proportions were detected in any of the five loci of the genotyping panel.
In spite of being able to genotype only five loci, these were sufficiently informative to discriminate individuals: we estimated a P(ID)unbiased of 2.25×10⁻⁶ and a P(ID)sib of 9.67×10⁻³, values considered sufficiently stringent for conservation purposes (less than 0.01 [41]). Mean observed heterozygosity was 0.7721 and allelic diversity was 6.6 alleles/locus (Table 1).
Approximately 1000 fecal samples were found, but only 63 were considered sufficiently fresh to sample. Among the samples collected for laboratory analysis, 10 amplified across three loci, two amplified across four loci and 24 amplified across all five loci. Only 20 genotypes were unique across the five loci, and four genotypes were repeated once across samples. This suggests that feces for each of four individuals were collected twice. The samples with replicate genotypes were collected within a two-day interval and were separated by 630, 505, 400 and 150 m.
Samples collected in water were useful for genetic analysis: 58% (14) of the samples that amplified at five loci were collected in water. Rapid degradation of the extracted DNA was observed for all samples, even when kept at −80 °C, with amplification failing approximately 15 days after extraction.
We used the 32 unique genotypes (with a minimum of three loci) for population analysis. In STRUCTURE, the maximum marginal log-likelihood of the data given K (logL(D|K)) was found for K = 1. This suggests the presence of a single genetic unit in the study area, which is corroborated by the AMOVA results between opposite margins of the reservoir. Most of the genetic variance was contained within each margin rather than between margins. This resulted in low FST (0.008) and FIS (−0.011) values (both p > 0.05). The gene diversity index for the population of the Balbina reservoir was 0.6634 ± 0.4207.
As shown in Table 2, errors surrounding estimates of r, the relatedness index, in the programs KINGROUP and IDENTIX were similar, and the type I errors were relatively high (0.32). The full-pedigree likelihood method implemented in COLONY was extremely conservative, with a great number of first-order pairs being misclassified (misFO = 0.94). The small sample size, the small number of markers, and the lack of information about the individuals (sex and age) probably affected the classifications made by COLONY. Also, COLONY does not identify UN pairs; we therefore considered all unclassified pairs as UN, which likely inflated our COLONY estimates of misFO and type II error.
KINGROUP's pedigree hypothesis test was also conservative, but it had a smaller misFO error (0.38) and smaller errors overall than those observed with COLONY. Moreover, the confidence interval estimates, analyzed in IDENTIX, were the most reliable method: all errors were below 0.1 (Table 2). The main reason for these smaller errors was the classification as 'inconclusive' of confidence intervals that included values between 0 and 0.125. Furthermore, allele sharing patterns proved to be a good approach to classify FO pairs: 87% of simulated pairs that shared ≥7 alleles at ≥0.8 of the loci were FO pairs.
Due to potential problems associated with missing data in kinship analyses, we restricted our analysis to samples that amplified at four (n = 2) or five (n = 20) loci. The polygamy model had the most support from the data, with the other two models obtaining Bayes factor values less than −20.0 (Table 3). Based on the criteria described above for classifying relationships, we found 10 first-order relationships (parent-offspring or full-sib pairs, FO), 10 half-sib relationships, 25 unrelated pairs, and 186 inconclusive pairs.
In five first order relationships and five half-sib pairs, the individuals were located on opposite sides of the reservoir. The distance between FO individuals ranged from 0.1 to 29.
Discussion
In this study, we present non-invasive genetic data on T. terrestris sampled from the islands formed by the Balbina hydroelectric reservoir in the central Amazon. Our objective was to test the hypothesis that individuals that overlap in their home ranges are more likely to be related than individuals that do not. Below we interpret our results in terms of what we were able to achieve logistically, and what they mean for tapir biology and mammalian social behavior.
Finding fecal samples suitable for genetic analysis in the Amazon rainforest is hindered by the dense forest and by the region's climate. The dense canopy results in a relatively dark understory, while leaf litter acts as camouflage, making it difficult to spot dung samples. Meanwhile, the warm, humid climate accelerates DNA degradation in feces [65,66]. The local environmental conditions notwithstanding, once samples considered sufficiently fresh for analyses were found, storage time became an important factor influencing amplification success. Several samples were collected while optimization of laboratory protocols was still underway, which resulted in longer storage time and a lower genotyping success rate. Thus, we recommend that protocols be established and optimized prior to initiating fieldwork [34].
Despite these operational difficulties, we were able to obtain a reasonable number of samples for a large Neotropical mammal. In 55 field days we obtained reliable information on at least 20 individuals, as identified by genotype profiles. In comparison, studies based on animal capture, such as those of Tobler [15] and Medici [16], caught seven individuals over a six-month period in the Peruvian Amazon and 35 individuals over approximately nine years in the Atlantic Forest, respectively. It is also apparent that non-invasive samples can be used for recapture studies in the Amazon biome; samples with identical genotypes were collected within a short span of time and at close distances, which increases our confidence in a true recapture. Thus, non-invasive sampling allows relatively rapid access to important biological information about elusive species [67] and provides encouragement for future research on elusive tropical species.
A large proportion of the samples were collected in water rather than on land. Contrary to general expectation, our results demonstrate that dung samples found in water bodies in tropical terrestrial ecosystems can yield high-quality genetic data. The lack of strong water currents in the Balbina reservoir and sampling during the dry season allowed the feces to remain intact for a greater period of time. This opens up the possibility of sample collection in study areas that encompass rivers without a strong current or in lakes. It should be noted that a sample in water that was carried by the wind/current could be differentiated from defecation at the collection sites by the appearance, quantity and grouping pattern of the pellets.
(Table 1 note: motif type (Motif), allele size variation (Size), number of samples (N), allelic richness (A), observed heterozygosity (Ho), expected heterozygosity (He), probability of identity with sample size correction (P(ID)unbiased), and probability of identity between sibs (P(ID)sib). doi:10.1371/journal.pone.0092507.t001)

Our results indicate a single genetic unit in the landscape of the Balbina reservoir. While this result is important for validating the assumptions of the relatedness analyses, it is also interesting in terms of landscape genetics. It suggests that the Uatumã river does not act as a barrier to gene flow in T. terrestris. However, the question of whether the increased width of the Uatumã river will have an effect is not likely to be answered any time soon. The time elapsed since the damming of the river (24 years) has not been sufficient, relative to the species' generation length, to generate large effects on the spatial distribution of genetic variation (e.g. [68]): the life expectancy of tapirs in captivity is 30 years [69], with a generation time of approximately 11 years [16]. Thus, the low FIS and FST values more likely reflect levels of gene flow and genetic diversity that existed prior to the flooding of the dam. Hence, our results may be used as a benchmark in future studies aimed at assessing potential disturbances caused by the building of the dam. However, we argue that our data provide evidence that the width of the lake does not pose a complete barrier to tapir movement in this landscape. Assuming a typical life table for mammals [70], with high mortality rates among juveniles and adults at an advanced age, the proportion of individuals in the population as old as or older than the dam is probably less than 5%. Moreover, the life expectancy of mammals in the wild is generally lower than that of those kept in captivity. In this context, we feel comfortable concluding that some of the 10 related pairs of individuals located on opposite margins of the reservoir include individuals born after the flooding. This corroborates the idea that barriers to gene flow (natural or artificial) in lowland tapirs occur at larger spatial scales involving more salient barriers, such as the Amazon River [71]. It is not possible, however, to say whether tapirs are able to swim across the full extent of the reservoir, or if islands along the old riverbed (Figure 1) are used as stepping-stones. Bayesian (6.5) and maximum likelihood (7.1) estimates of θ (the mutation-scaled effective population size) were similar, as is expected when using non-informative priors. Given the mutation rate assumptions, the effective population size may vary from 3250 to 17750. Generally, the ratio of effective population size to census population size is thought to be around 1:10. Thus the number of individuals in the Balbina reservoir region is large, ranging between 32,500 and 177,500 depending on the assumed mutation rate. If one considers that the census population of tapirs in an Atlantic Forest fragment of 360 km² is ~300 individuals [16], we would need an area ~3 times larger than the REBIO Uatumã to harbor 30 thousand tapirs. This is not unreasonable given the continuity of the habitat in the region. Therefore, the estimated values are plausible if we consider that the geographic area occupied by the Balbina population is likely to be much larger than the sampled area.
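For transparency, the conversion from the reported θ estimates to effective and census population sizes can be reproduced as in the following sketch, assuming, as in the text, μ between 1×10⁻⁴ and 5×10⁻⁴ and an Ne/N ratio of roughly 0.1.

```python
# Theta estimates from MIGRATE-N (Bayesian and maximum likelihood) and the
# assumed range of microsatellite mutation rates, as given in the text.
thetas = {"Bayesian": 6.5, "ML": 7.1}
mutation_rates = (1e-4, 5e-4)

for label, theta in thetas.items():
    for mu in mutation_rates:
        ne = theta / (4 * mu)      # theta = 4 * Ne * mu  =>  Ne = theta / (4 mu)
        print(f"{label}: mu={mu:.0e} -> Ne ~ {ne:,.0f}, census ~ {10 * ne:,.0f}")
# Ne spans roughly 3,250 (theta = 6.5, mu = 5e-4) to 17,750 (theta = 7.1, mu = 1e-4),
# i.e. a census size of about 32,500 to 177,500 assuming Ne/N ~ 0.1.
```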
The geographic extent of the population that includes Balbina is likely to be very large, considering that De Thoisy et al. [71,72] demonstrated minimal differentiation and concomitantly high gene flow for T. terrestris over an area at least 100 times larger than that sampled in this study. As a further comparison Drummond et al. [73] and Spong et al. [51] estimated even larger effective population sizes than the present study for the Beringian bison and the Tanzanian leopard, respectively. Moreover, the estimated mean observed heterozygosity and allelic diversity in the present study are among the highest reported for large mammals [31]. De Thoisy et al. [72] found similar values and Gonçalves da Silva et al. [74] found slightly lower values for tapirs in captivity in Argentina (Table 4). For T. bairdii [31], the reported observed heterozygosity and allelic diversity values were considerably lower (Table 4), as expected for endangered populations [75].
The mating system analysis we carried out in COLONY suggests that lowland tapirs are polygamous (Table 3; the table reports maximum log-likelihood values for the mating system models tested for T. terrestris, with the associated log Bayes factors and posterior probabilities). However, C. R. Foerster, after 10 years of study, suggested polygyny for Baird's tapirs (T. bairdii) [16]. With either mating strategy there is generally a high degree of home range overlap among adults, as found for T. terrestris [14][15][16], but the observation of home range overlap between one female and two males and between one male and two females [15,16], plus camera-trapping observations of females being accompanied by different males (E. P. Medici, personal communication), suggests a polygamous system (i.e., both males and females are promiscuous [76]). Therefore, the evidence found in the present study, together with ecological observations, supports the hypothesis of a polygamous mating system for T. terrestris.
In general, polygamous ungulates that display some kind of territoriality are largely folivores found in open-habitat areas, such as grasslands [77]. While our result would appear to contradict this observation, we do not believe it is entirely inconsistent with it. Instead, we propose that, if lowland tapirs are indeed promiscuous, the observation of a behavior typical of grassland habitats is a case of Krumbiegel's rule, which states that behavioral patterns evolved in one type of habitat will persist long after that habitat changes [78]. We know that tapirs in Asia evolved largely in open grasslands and are now one of the few remaining taxa from a large megafauna that did not go extinct with the rise of tropical jungles [79]. It is possible that a similar scenario occurred in South and Central America [80].
Regarding the relatedness analysis, we classified pairs into a specific relatedness category based on the estimated r-values, confidence intervals surrounding each r estimate, pedigree hypotheses tests, mean number of shared alleles, and mean number of loci that share at least one allele (Table S1). As can be seen from our simulated data, the combined results increased our confidence in our classification, while accounting for the uncertainty resulting from the number of successfully assayed markers. Although we are confident that our classification is reliable, it is important to note that our sample sizes for the purpose of statistical analyses were small, as is the case for many studies with large mammals. Nevertheless, we consider the results informative and valuable, being the first data obtained via noninvasive sampling to identify individuals of an elusive mammal in the Amazon.
In the case where at least one of the sexes is philopatric there is an expectation of increased Moran's I at the local/social scale relative to larger, landscape scales (e.g. [81]). Our data show no difference between Moran's I at the individual scale and at the landscape scale. Pairs of related individuals did not occur geographically closer than pairs of unrelated individuals. We thus have no evidence to support the hypothesis that recognition between related individuals leads to a greater tolerance among tapirs, that tapirs prefer to be close to relatives or have philopatric behavior. Therefore, our data do not corroborate the formation of family units in T. terrestris.
The fact that kinship does not seem to influence the spatial pattern of individuals is unusual in mammals [82]. We are aware of only one other example of this in mammals, the raccoon (Procyon lotor) [83]. Interestingly, raccoons and tapirs seem to have a lot in common. Much like tapirs, raccoons are described as largely solitary, widespread species that occupy various types of habitats at varying densities. Similarly to our study, Hirsh et al. [83] found no pattern in spatial proximity between related and non-related individuals. Instead, they found that other factors, such as the availability of winter dens and the concentration of food resources, played a much more significant role in driving associations between individuals. Thus, as in raccoons, recognition between individuals may occur independently of kinship, and other factors, such as environmental conditions, may influence the formation of social groups. Barongi [69] and Foerster & Vaughan [84] attributed tapir home range overlap to the fruiting season, in which the greater availability of food resources promotes group formation. It is also possible that the formation of the Balbina reservoir has disrupted territories and family units, and due to the long-lived nature of the species these characteristics have not yet returned to equilibrium.
The presence of unrelated pairs at the individual scale, coupled with the absence of correlation between relatedness indices and geographic distances suggests a high variance in tapir movement, which may represent dispersal events. Note that dispersal distance is defined as the distance between natal and breeding sites [6]. We cannot, however, distinguish between natal dispersal and breeding dispersal as we have no data for the ages of the animals studied. Adult individuals have been seen leaving their habitual home ranges by as much as 10 km to visit mineral licks [15]. So it is possible that adults make similar excursions in the search of the opposite sex.
Nevertheless, it is generally observed in mammals that individuals disperse from their natal site at the onset of sexual maturity, or soon after, to establish their own home ranges [5]. Foerster & Vaughan [84] observed the birth of four tapirs that dispersed from their natal area after three to four years. During the period of residence of the juveniles, their parents maintained an exclusive area without other adults. In this case, the establishment of territories would be associated with a period of parental care; it is not clear, however, whether tapirs display territorial behavior. Thus, the observed high variance in distances between related individuals could reflect different stages of dispersal (e.g., before and after natal dispersal), as well as breeding dispersal or movement behavior associated with the search for resources.
The present study offers novel information on the behavioral ecology of T. terrestris and the use of non-invasive sampling for individual discrimination in tropical forests. Based on the present findings, we suggest a polygamous mating system and dispersal from the natal home range for T. terrestris. Apparently, tolerance between individuals is not influenced by kinship, as the proportion of related pairs at the individual scale was not different from the proportion observed at the landscape scale. This is unusual in mammals, but has been described elsewhere. In respect to the methods, the non-invasive sampling allowed rapid access to genetic data from an elusive species, even in the Amazon biome with its warm and humid forests. Therefore, the methods applied here should work for other medium-sized and large mammals in similar environments. However, researchers working in this perspective should be rigorous not only in the laboratory procedures, but also in testing kinship category assignments and selecting the most appropriate analytical methods for their data.
Supporting Information
Table S1 Information used in the classification of relationships. Note: individuals in the pair (Ind1, Ind2); mean number of loci for which at least one allele was shared between a pair (Mean share); mean number of alleles shared between individuals in a pair (Allele count); relatedness indices (r) of Lynch and Ritland (rLR99) and Queller and Goodnight (rQG89); 95% confidence interval for each pairwise r of Lynch and Ritland (CI_LR99) and Queller and Goodnight (CI_QG89); pedigree hypothesis test with the primary hypotheses being: parent-offspring (PO), full sibs (FS), half sibs (HS), cousins (C) and unrelated (UN); final classification (Conclusion) of a pair as inconclusive (IN) or into a relationship class (FO, first-order relatives; HS, half sibs; or U, unrelated); geographic distance between the individuals in meters (Distance); and additional information used to assist the classification (Additional information). In the pedigree hypothesis tests, tests marked '*' were significant at the 0.05 level, '**' at the 0.01 level and '***' at the 0.001 level. Distance values marked with the '{' symbol indicate that the individuals were located on opposite margins of the reservoir. The probabilities mentioned in the additional information were based on the errors measured from the results of the simulation. (XLS) | 2017-05-20T11:47:43.021Z | 2014-03-26T00:00:00.000 | {
"year": 2014,
"sha1": "78f96b0c30d0b61594a90a832a47b33f3d5f2fb6",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0092507&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78f96b0c30d0b61594a90a832a47b33f3d5f2fb6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
261556526 | pes2o/s2orc | v3-fos-license | Spent substrate from mushroom cultivation: exploitation potential toward various applications and value-added products
ABSTRACT Spent mushroom substrate (SMS) is the residual biomass generated after harvesting the fruitbodies of edible/medicinal fungi. Disposal of SMS, the main by-product of the mushroom cultivation process, often leads to serious environmental problems and is financially demanding. Efficient recycling and valorization of SMS are crucial for the sustainable development of the mushroom industry in the frame of the circular economy principles. The physical properties and chemical composition of SMS are a solid fundament for developing several applications, and recent literature shows an increasing research interest in exploiting that inherent potential. This review provides a thorough outlook on SMS exploitation possibilities and discusses critically recent findings related to specific applications in plant and mushroom cultivation, animal husbandry, and recovery of enzymes and bioactive compounds.
Introduction
Mushrooms are sporocarps, i.e. visible spore-bearing structures, fulfilling an essential function in the sexual reproductive stage of the life cycle of many fungi [1]. Many mushrooms are considered edible because they do not contain toxins and are low in antinutrients, while they are rich in proteins, dietary fiber, vitamins, minerals, and other nutritional components [2]. The specific composition of mushrooms depends on the species. Mushrooms can have up to 30% (w/w) crude protein, while the content of crude fiber, fat, and carbohydrates in some species can be up to 28, 8, and 95% (w/w), respectively [3]. Edible mushrooms are climate-smart, protein-rich food sources that can partially substitute meat, whose production has a significant climate impact. Furthermore, due to their high content of various health-promoting ingredients, e.g. β-glucans, peptides, proteins, and phenolic compounds [4], they possess immunomodulatory, antibacterial, cytostatic, antioxidant, and other properties, and for this reason, the term 'medicinal mushrooms' is also used when referring to them [3].
The benefits of mushroom consumption for human health and wellbeing are well recognized. As a result, the pertinent demand has increased considerably on all continents, and edible mushroom commercialization has nowadays become a worldwide business [5]. Hence, mushroom production has increased more than 30 times since 1978, and it is a fast-expanding industrial activity. Although most of the production is concentrated in Asia, with China as the top producer holding around 90% of the global market, mushroom production in the European Union, led by the Netherlands and Poland, and in the Americas has experienced a significant increase during the last decades [6]. The commercial cultivation of mushrooms includes more than fifty species. The top four belong to the genera Lentinula (L. edodes, popularly known as 'shiitake'), Pleurotus ('oyster mushrooms'), Auricularia ('wood ear mushrooms'), and Agaricus ('button mushrooms'), which together correspond to 74% of the world market [7].
Mushrooms are cultivated on substrates based on plant biomass, e.g.crop residues and underutilized wood leftovers, which are continuously increasing because of the expansion of agricultural production driven by global population growth.Currently, disposal by burning is one of the chief methods for coping with the accumulation of plant residues.However, this widespread practice is against sustainability principles, contributes substantially to air pollution [8], and results in a considerable waste of biomass resources that are highly valuable for generating materials, fuels, and chemicals of high economic and social value [9].
The valorization of crop residues within new recycling models, i.e. substrates for mushroom cultivation, is crucial for the sustainability of agricultural production.Therefore, besides leading to the generation of food, mushroom cultivation is an example of holistic exploitation of residual lignocellulosic biomass through an efficient continuous-flow process carried out indoors, requiring remarkably lower land areas than most other crops [10].Furthermore, unlike conventional agriculture, which is season-dependent, mushroom production can be performed throughout the year independently of climatic conditions.
The mushroom cultivation process aims at producing fruitbodies of edible or/and medicinal fungi.At the end of the process, the fruitbodies are harvested, and an exhausted residual substrate is generated.That nutrient-depleted biomass waste, known as spent mushroom substrate (SMS), is the main by-product of the mushroom industry.Depending on the nature of the materials used for formulating the substrate, the type of production system, and the cultivated species, three to five kg of SMS is generated per kg of fresh mushrooms [11].In total, ca.64 million tons of SMS were generated worldwide by the mushroom industry in 2018, and this figure could escalate to above 100 million tons by 2026 [12].
The large quantities of generated SMS, currently regarded as a waste product with little inherent value, present a major challenge to mushroom producers due to the need to find suitable disposal sites and to cope with the high cost incurred for the transportation of a bulky material with high moisture content and low density; drying of fresh SMS is an energy-intensive activity that is hardly feasible. Moreover, SMS handling/disposal is of primary environmental concern due to the emission of greenhouse gases from spontaneous anaerobic digestion (often occurring in the piles formed during provisional storage), foul odors, and leachate drainage to water receptors causing pollution and eutrophication [13]. Landfilling has traditionally been the chief disposal strategy for SMS, but it is now banned in the European Union by a Council Directive on the landfilling of biodegradable wastes [14]. The current linear 'take, make, dispose of' approach, where SMS is regarded as waste, threatens the future development of the mushroom-growing sector. Valorization of SMS is crucial for developing a sustainable mushroom industry in the frame of a circular-economy model. It is essential to investigate SMS characteristics to identify appropriate valorization alternatives.
SMS composition and properties are mainly associated with the type of raw materials and supplements used to prepare the initial mushroom substrate.For the cultivation of edible mushrooms of the genera Lentinula, Pleurotus, and Auricularia, which represent 60% of the global production, various lignocellulosic by-products, e.g.forest, agricultural and agro-industrial residues, are used as substrate base.Chicken manure is also a major component for other mushroom species requiring composted substrates (e.g.those of the genus Agaricus).Starch-containing and nitrogen-rich ingredients (e.g.cereal bran or legumes' flour) and mineral salts are used as supplements.During cultivation, substrate components are enzymatically degraded, and the resulting nutrients (together with others existing in the substrate) are used for fungal growth and mushroom production.Mass losses in the ranges of 26-46%, 57-77%, and 61-75% of the initial cellulose, hemicelluloses, and lignin, respectively, have been reported for Pleurotus ostreatus, Pleurotus pulmonarius, and L. edodes [15][16][17].In the end, SMS composition strongly depends on the nature of the initial substrate and the cultivated species [18].Therefore, SMS primarily consists of plant cell-wall components (lignin, hemicelluloses, cellulose) and residual fungal mycelium, as well as non-cell-wall carbohydrates, proteins, and minerals.
There are different valorization routes for SMS, and some of them have already been discussed in previously published reviews [19,20].The current review is aimed at providing, in brief, an updated overview of potential SMS applications and products related to (i) new cycles of mushroom cultivation, (ii) agriculture and animal husbandry, and (iii) the production of enzymes and bioactive compounds.SMS valorization as part of cascade-use systems for plant biomass processing is also discussed.Bioremediation and energy-related uses are not included because they were exhaustively presented in a recent review [20].This review is based on an exhaustive Scopus search performed in July 2022.The search terms used were Spent Mushroom Substrate OR Spent Mushroom Compost AND relevant keywords of each specific application.The topic presented in this review is of relevance to the UN Sustainable Development Goals 2 (Zero hunger), 3 (Good health and wellbeing), 9 (industry, innovation, and infrastructure), 13 (climate action), and 15 (life on land), considering that the discussed valorization alternatives have the potential for providing innovative solutions to increase food security, and contributing to the production of healthy food and reduction of the use of harmful chemicals in farmlands.
Reusing spent mushroom substrate for new cultivation of mushrooms
The spent mushroom substrate can be used in substrate formulation for new cycles of mushroom cultivation provided that suitable lignocellulosic materials are employed, the fungal strain is appropriately selected, and the environmental conditions are optimally regulated.Supplemented cereal straw and wood sawdust are the most common substrates in commercial mushroom cultivation due to their composition, availability, and relatively low cost.Agricultural or agro-industrial by-products with low or no economic value, such as sugarcane bagasse, coffee husks, and olive mill and winery wastes, are exploited in mushroom production, contributing to both the improvement of cultivation performance and the enhancement of mushrooms nutritional value [21][22][23][24].Using cheap lignocellulosic residues positively affects the cost of substrate, providing an environmentally friendly solution for their effective management and valorization.
Cultivated mushrooms are often grouped, based on their ecological adaptation and requirements, as either primary decomposers (e.g. P. ostreatus, L. edodes), which are produced directly on previously untreated (or partly treated/composted) lignocellulosic substrates, or secondary decomposers (e.g. Agaricus bisporus, Volvariella volvacea). Secondary decomposers are cultivated on composted substrates prepared from various agricultural residues, including manures. The proposal to reuse SMS in new crops was originally based on the sequential use of the substrate, first by primary decomposers and then by secondary decomposers, and on the enzymes involved in each process, since these vary among species of different ecological groups [20,25]. However, in most studies, supplementation is required to adjust the nutrient content when SMS is used as the sole (or the main) substrate ingredient in mushroom cultivation. Hence, this material could be exploited to cultivate a broader range of mushroom species (not only secondary decomposers). Furthermore, many of the most successful SMS applications have been reported when the same species as the one originally cultivated on the spent substrate was also used in the new crop, e.g. P. ostreatus, Auricularia polytricha, A. bisporus [26][27][28][29]. The initial substrate composition, the cultivation cycle duration, and the number of flushes harvested are important in optimizing SMS for reuse in mushroom cultivation. The type of substrate pretreatment adopted prior to cultivation (e.g. chopping, composting), the incorporation rate of SMS into the main substrate of the new crop, further supplementation with nutrients, and the selection of the species/strain to be used are also important parameters to consider when such applications are developed. Factors affecting the success of new mushroom crops based on SMS recycling are summarized in Figure 1.
Reported results from using SMS in new mushroom crops show wide variation in the effect of the recycled material on the final yield (Table 1). In several cases, similar [27,29,40,44,50] or even higher [26,28,45] biological efficiencies (BE; the percentage ratio of fresh mushroom weight to the dry weight of the respective substrate) were recorded in substrates containing SMS derived from the cultivation of either the same or other mushroom species, compared to conventional (first-use) substrates. However, some studies also reported that incorporating high amounts of SMS into the new cultivation medium or the casing layer negatively affected the final mushroom yield [30,36,37,42], which could be mainly attributed to the low nutrient content or to inadequate supplementation of the spent substrates.
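Stated as a formula (following the definition above; the numbers below are a hypothetical worked example rather than data from any of the cited studies), the biological efficiency is

    BE (%) = 100 × (fresh weight of mushrooms harvested over all flushes) / (dry weight of the substrate at spawning)

For instance, 850 g of fresh mushrooms harvested from a substrate containing 1 kg of dry matter corresponds to a BE of 85%; values above 100%, as reported in some of the studies discussed below, are possible because the numerator refers to fresh (water-containing) mushrooms while the denominator refers to dry substrate.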
Most of the investigations related to the reuse of SMS in new cultivation cycles focus on species of the genus Pleurotus. Indicatively, out of the 27 selected studies shown in Table 1, 16 deal with the reuse of Pleurotus SMS, and 14 with the cultivation of oyster mushrooms on SMS-containing substrates. This may be explained by the relative ease of oyster mushroom cultivation, their rather short production cycle, and the wide range of suitable substrates available. Among the most relevant examples are the use of Pleurotus eous SMS mixed with wheat straw for the cultivation of other Pleurotus species (BE up to 113% [45]) and the use of supplemented SMS from Pleurotus sajor-caju for the production of the same mushroom (BE up to 125% [47]). Another successful example is the use of supplemented SMS from P. ostreatus as substrate for growing P. ostreatus and P. pulmonarius mushrooms, which resulted in the highest BE values reported in the pertinent literature, namely 185% for P. ostreatus and 208% for P. pulmonarius [26]. Shiitake (L. edodes), a widely cultivated edible mushroom, is produced mainly on hardwood sawdust substrates. SMS deriving from L. edodes seems to be suitable for the cultivation of various oyster mushroom species, including P. ostreatus, P. sajor-caju and Pleurotus cornucopiae (BE: 61-79%), as well as for Flammulina velutipes (BE: 88%) following rich supplementation with cereal derivatives [50,52]. On the other hand, using SMS from other mushrooms (e.g. A. bisporus and F. velutipes) to cultivate shiitake requires mixing with untreated sawdust at a rate of at least 40% [48,49].
The reuse of SMS deriving from the cultivation of Agaricus species to establish new mushroom crops is quite demanding due to the nature of the final material and the processes leading to its production. Therefore, most attempts focus on suitably upgrading it with soybean meal and Target® (a commercial delayed-release nutrient suitable for mushroom cultivation) before reusing it as an ingredient for A. bisporus cultivation (BE: 97-144%) [27]. Attempts to exploit SMS as casing material, alone or mixed with farmyard manure or sphagnum peat, in the cultivation of Pleurotus eryngii and A. bisporus, respectively, have also been reported [38,42].
Concerning the use of SMS from less widespread species, noteworthy is the case of F. velutipes SMS, which, when combined with oak sawdust and rice bran, supported satisfactory yields of L. edodes mushrooms (BE: 60-84%) [48]. In addition, sawdust-based SMS from Ph. nameko, H. marmoreus and Hericium erinaceus was used to produce P. ostreatus mushrooms (BE: 66-73%) [30].
The reuse of SMS in new mushroom crops seems to have considerable potential since it can support high yields and is both financially feasible and environmentally sustainable.The elements and organic compounds existing in SMS constitute valuable sources of energy and nutrients, which can partially or entirely cover the needs of additional cultivation cycle(s) after suitable treatment or supplementation.
Spent mushroom substrate as feed
It is estimated that agricultural production must increase by 70-100% to meet the food demand of the growing global population, which is predicted to reach 9.7 billion by 2050 [53]. Soybeans and maize are the most common energy and protein sources used by livestock farmers to produce meat, eggs, and dairy products, which, in turn, are the main protein sources in the human diet [54]. The demand for animal feed is predicted to increase significantly, and the feed industry must look for additional or alternative means to cover it. Exploiting suitable bioresources (e.g. SMS) could contribute in this direction by readily providing material to be used as a feed supplement.
The main raw materials used in mushroom cultivation are rich in cellulose, hemicelluloses and lignin, while their protein content is generally low [23].During solid-state fermentation by mushroom-forming fungi, the substrate polymers are enzymatically degraded, and the digestibility of plant residues is considerably improved.Concomitantly, the growth of mycelial biomass upgrades the substrate by increasing its content in proteins and bioactive compounds, e.g.polysaccharides and ergosterol [55][56][57][58].Indicatively, the growth of P. ostreatus mushrooms on faba bean hulls increased their protein content from 208 g kg −1 (on a dry weight (DW) basis) in the initial substrate to 347 g kg −1 in the SMS [59].Furthermore, P. ostreatus growth enriched the material in 14 out of 16 analyzed amino acids, and significantly reduced the content of antinutritional compounds, such as tannin, vicine, and convicine.SMS from other mushroom species is also rich in compounds of interest for enhancing the quality of feed rations.L. edodes SMS is rich in the provitamin D 2 ergosterol (151.6 mg ergosterol equivalents/100 g) [58], while SMS from several other species contains high amounts of polysaccharides, including β-glucans [60].Consequently, the high nutritional value of SMS is the main factor for its inclusion in the diets of poultry, ruminants, and monogastric animals, and, recently, in fish and edible insects.A summary of relevant reports on the reuse of SMS as animal feed is presented in Table 2.In addition, the main outcome of each study is briefly presented and further discussed below.
Spent mushroom substrate in the diet of poultry
The incorporation of SMS derived from the cultivation of P. eryngii, P. ostreatus and H. marmoreus (fermented or not by Bacillus subtilis) at a ratio from 5 to 15% (w/w) in a poultry diet increased the feed intake without having adverse effects on the egg production and the mass of useful meat [67,74,75,79].On the other hand, incorporation of Agaricus blazei SMS at rates exceeding 0.4% (w/w) caused a gradual reduction in the weight gain of broiler chickens, while inclusion ratios of only 0.2% exhibited the highest value of weight gain and feed intake, as well as the best feed conversion [81].Similarly, low inclusion ratios of P. sajor-caju SMS (up to 0.67%) improved the weight gain of broiler chicks in the first 21 days [80].
Spent mushroom substrate in the diet of monogastric animals
SMS inclusion in the diet of monogastric animals has been tested with both pigs and model animals (mice, rats). The addition of sawdust-based SMS from Grifola frondosa (25% w/w) to rats' diet did not affect weight gain, feed efficiency, or biochemical parameters, while fecal weight and protein content were found to be higher [76]. In addition, an orally administered hot-water extract of SMS from Ganoderma lucidum enhanced immune function in mice [65]. Furthermore, the use of low amounts of SMS from P. ostreatus (up to 3.5%, w/w), Cordyceps militaris (0.2%, w/w), and L. edodes (3%, w/w) in pigs' diet positively affected feed intake and conversion, as well as the final weight and quality of the meat obtained in the trials [55,64,82]. C. militaris SMS resulted in increased immunoglobulin A and G levels and glutathione peroxidase activity, while leukocyte, cholesterol and malondialdehyde contents were decreased [55]. Similarly, beneficial effects on the intestinal mucosal barrier, immunity, and the diversity and abundance of bacteria in the colon and cecum were observed in weaned piglets fed with L. edodes SMS [64]. It is noteworthy that SMC seems to be useful as a 'behavior regulator' in pigs; when given access to mushroom compost through a metal grid, pigs demonstrated significantly reduced negative behavior toward penmates (e.g. nosing, tail biting and chewing), as well as improved overall welfare, in comparison to pigs with no access to SMC [84].
Spent mushroom substrate in the diet of ruminants
The use of SMS as animal feed has been investigated to a larger extent for ruminants than for monogastric animals (13 vs. 5 publications, respectively, in a Scopus search performed in July 2022 with the keywords 'Spent Mushroom substrate/compost' AND 'feed'). Incorporating SMS from various mushroom species at a rate of up to 30% (w/w) in the daily intake of ruminants revealed its potential as a supplement to conventional feeds without affecting several relevant parameters (Table 2). Specifically, feeding sheep for three weeks with a diet including up to 20% (w/w) of A. bisporus SMS did not affect nutrient intake, digestibility, or nitrogen balance [83]. Similarly, A. bisporus SMS fed at a rate of 15% (w/w) for 170 days had no effect on the carcass and internal organs of male Holstein calves [73]. Plain P. ostreatus SMS at ratios higher than 15% had adverse effects on sheep slaughter weight, empty body weight, and hot and cold carcass weight [69]. In contrast, when rice straw was fermented for eight weeks with P. sajor-caju SMS before being fed to Alpine dairy goats, it increased the rumen-degradable fiber fraction and improved dry matter intake and milk yield [61].
Feeding male sika deer for 60 days with P. ostreatus SMS (10%, w/w) resulted in a reduction of the intake of organic matter and improved digestibility of crude fibers, while no effect on either the apparent nutrient digestibility, feed intake, velvet antler production, or biochemical indexes was observed when F. velutipes SMS (10%, w/w) was fed to the same animals [62].
When hot water extracts from G. lucidum and Ganoderma chalceum (syn.G. balabacense) SMS were supplemented to dairy cow feed, immunity and antioxidant capacity were increased, and milk quality was improved [71,72].By using SMS extracts, the addition of large amounts of fibrous components from the untreated SMS could be avoided, but further studies are required to investigate their impact on animal health and the optimum incorporation rate, which depends mainly on the substrate origin.
Agaricus and Pleurotus species are usually cultivated on straw-based substrates, while L. edodes, G. lucidum, Gr. frondosa and He. erinaceus are cultivated on wood-based substrates. The incorporation rate of such substrates in animal feed is low, and further treatment is necessary to improve their nutritional characteristics. In recent years, microbial fermentation with probiotic microorganisms has been adopted as a cheap, fast, and efficient method to reduce fibrous ingredients and upgrade the protein content of SMS, including SMS deriving from sawdust-based media. Moreover, probiotic microorganisms relieve animals' weaning stress, regulate intestinal microbiota, and reduce the incidence of diarrhea [85,86].
Due to its high moisture content, SMS tends to decompose rapidly; hence, it needs to be processed quickly.This could be achieved by ensiling, for instance, by lactic acid fermentation under anaerobic conditions [87].Lactic acid bacteria produce desirable metabolites, and suppress the growth of clostridia and other deleterious microbial populations [88].Although ensiling processes may be initiated naturally by the epiphytic microorganisms existing in the initial material, they can be assisted by inoculated bacteria.Inoculation of SMS with Lactobacillus, Bacillus, or Enterobacter spp.ensures rapid acidification, and increases dry matter degradability and crude protein content [64,66,70,75,77,78].
An indicative example is the use of a sawdust-based P. eryngii SMS incorporated at a high rate (45%, w/w) into silage with various agricultural by-products and fermented for 22 days [68]. Feeding sheep with the resulting product gave a similar energy value, lower fiber digestion, and higher protein metabolism and utilization compared to a rye straw-based control diet. In addition, sawdust-based SMS from the same mushroom species, when fermented with Enterobacter and Bacillus spp., significantly improved growth performance and carcass traits in Hanwoo steers compared to rice straw feed administered for 12.6 months during the growing and fattening periods [77]. Similarly, P. ostreatus SMS fermented with Lactobacillus plantarum and Pediococcus acidilactici could replace up to 50% of the conventional feed provided to Hanwoo steers and post-weaning calves [66,70,78]. Such an SMS-based feed improved the growth performance of the tested animals or enhanced daily gain through increased voluntary feed intake. Finally, feeding Liuyang black goats with P. ostreatus SMS co-fermented with whole rice plants improved meat quality and had no adverse effects on slaughter performance [63].
Spent mushroom substrate in the diet of fish and edible insects
Using SMS in pisciculture is also of substantial interest. SMS from P. ostreatus, Pleurotus cystidiosus, and G. lucidum seems to support the growth of catfish and to significantly increase its survival rate and digestive ability compared to commercial feeds [89]. The addition of up to 40 g kg−1 of C. militaris SMS to the diet of Nile tilapia (Oreochromis niloticus) improved growth performance, skin mucus lysozyme and peroxidase activities, as well as serum immune parameters [90]. Combining the SMS with L. plantarum further improved those parameters. Moreover, enrichment of the Nile tilapia diet with A. blazei SMC (1%, w/w) provided significant protection against infections by Streptococcus agalactiae [91]. Including an extract from the SMS of Schizophyllum commune, a popular mushroom in Thailand, in the feed of Nile tilapia enhanced their immune defense [92].
Given the need to reduce dependence on plant-derived feeds, the use of insects seems to be a promising alternative due to their high content of crude protein (up to 76%), fat (up to 59%), and energy (20-30 MJ/kg DM), as well as their short life cycle and low-cost growth requirements [93][94][95]. Six types of SMS, derived from Auricularia cornea, Auricularia heimuer, P. eryngii, P. ostreatus, Pleurotus citrinopileatus, and L. edodes, were recently evaluated for rearing black soldier fly (Hermetia illucens) and Tenebrio molitor larvae; L. edodes SMS was shown to be the most suitable replacement for the insects' conventional feed [96,97]. Furthermore, when Protaetia brevitarsis larvae were grown on L. edodes or Auricularia auricula-judae SMS, a nutrient-rich organic fertilizer with low phytotoxicity and high humic acid content was produced [98]. However, no studies exist on the production of insects that naturally feed on mushroom substrates; this could be a promising alternative considering the ease of insect rearing and its low demand for material resources.
In conclusion, using SMS in animal nutrition can significantly contribute to the enrichment of feed, particularly regarding proteins and bioactive compounds. However, incorporating SMS into the daily feed schedule is a complex process. The mushroom species, the initial substrate composition, the animal species, and the digestibility and voluntary intake of the final product are factors that must be carefully considered when setting the final inclusion rate. The high content of neutral detergent fiber (NDF) and acid detergent fiber (ADF), especially in sawdust-based substrates, is probably the main limiting factor in exploiting SMS as feed. Adopting appropriate treatment approaches, including lactic acid fermentation and the use of SMS extracts, could enhance the nutritional and acceptance characteristics, thus facilitating the incorporation of SMS into the diet of productive animals. Particularities related to the composition of each type of SMS and the individual needs of the animal species require careful case-by-case experimentation to ascertain the safe and efficient use of SMS.
Use of spent mushroom substrate in agriculture
The global demand for food and feed has led to the intensification of agricultural production and the widespread use of fertilizers and pesticides. World consumption of the three main fertilizer elements (N, P, K) was estimated at 201.7 million tons in 2020 [99], and nearly 3 billion kg of pesticides are used yearly [100]. Although the use of fertilizers and pesticides has increased food availability, their extensive application negatively impacts the environment and human health. Hence, adopting sustainable agronomic practices, including the development of novel environment-friendly and cost-effective biofertilizers and biopesticides, is a high priority. In line with that approach, the physical properties of SMS, its high content of bioactive compounds, and its readily available macro- and trace elements make it a promising candidate for several agricultural applications, the most important of which are presented in the following paragraphs.
Use of spent mushroom substrate as biofertilizer and soil conditioner
Organic soil amendments, commonly used in agriculture, exert positive effects on crop productivity and soil health by affecting physicochemical and biological properties of soil [101][102][103].Among the most widespread materials used as organic soil amendments are those originating from municipal wastes (food and gardening wastes, sewage sludge), animal husbandry (manure), crop production (stems, leaves and branches), and agroindustrial activities (fruit pulp and oil extraction by-products).However, those materials can contain hazardous compounds or plant pathogens, which are detrimental to soils and crops.
Since SMS is rich in nutrients and has low (and, most often, no) content in xenobiotic compounds and heavy metals, it can be used as a soil amendment either directly or after a composting treatment.SMS properties vary depending on the raw materials included in the initial substrate, the mushroom species, and the cultivation technology.Accordingly, a wide range of effects is noted on soil characteristics, crop growth and yield when SMS is used as a soil conditioner or fertilizer [104,105].However, it is worth mentioning that the mushroom species and the SMS composition are often not specified in pertinent publications, making it difficult to draw sound conclusions about its exploitation prospects.
A summary of relevant reports on incorporating SMS into soils is presented in Table 3, which includes information on the SMS origin, type, and incorporation rate, and the main effects of SMS addition on the soil and plants under study.The presented results indicated improvements in soil structure and fertility, which led to increased crop production or contributed to the restoration of barren lands and degraded soils.
By applying SMS of unknown origin (20 Mg ha−1) and chicken manure (10 Mg ha−1) to a sandy soil every one to two years for 20 years, Lipiec et al. [109] reported an increase in soil organic matter content of 102-201%. The experiment also resulted in a long-term increase in field water capacity, caused by an increase of up to 251% in residual pores. Similarly, fresh or composted SMS applied annually for four years at two different rates (8 and 25 Mg ha−1) to a semiarid vineyard soil increased the content of inorganic N in the soil surface layer (0-5 cm) [120]. However, only the highest SMS addition rate improved soil organic carbon, total nitrogen, and labile organic forms at 0-5 and 5-15 cm soil depths.
In other large-scale applications, incorporating A. bisporus SMS into the soil (100 kg ha −1 ) increased oxidizable organic carbon, organic N, and available P content [119].The values obtained for using A. bisporus SMS alone were higher than those resulting from incorporating a mixture of A. bisporus and P. ostreatus SMS (1:1, v/v).Both schemes of SMS addition resulted in increased phosphatase activity compared to unamended soil, while no alterations in the soil salinity or pH value were observed, and N mineralization was low.The same treatments also had positive effects when examined in a calcareous clayey-loam soil used for lettuce production [105].In that study, application of SMS resulted in higher values of oxidizable organic carbon, organic N, extractable K, available P, and cation exchange capacity (especially when using A. bisporus SMS) than in soils receiving NPK fertilization, while lettuce yields were similar.
Ngan and Riddech (2021) reported the application of a mixture of SMS with plant growth-promoting bacteria (Bacillus amyloliquefaciens) in the cultivation of Hibiscus sabdariffa [110].The study revealed an improvement in soil properties exceeding the effect exerted by NPK fertilization.Unfortunately, the lack of information on the SMS origin makes it difficult to compare the results with those of other relevant studies.
Testing fresh or sterilized F. velutipes SMS in cucumber cultivation resulted in a significant increase in total organic carbon, dissolved organic carbon, and microbial biomass carbon compared to NPK use and to no fertilization [111].The study revealed higher levels of microbial diversity and enzyme activities for the fresh SMS-amended soil compared to soil treated with mineral fertilizer.Correspondingly, A. bisporus SMS amendment in soils increased bacteria and fungi co-occurrence, and the plant yield was positively affected by the relative abundance of microbial hubs [112].Similarly, the application of Agaricus subrufescens and L. edodes SMS enhanced soil microbial population, and resulted in a remarkable increase in lettuce plants' dry weight compared with the results achieved with no fertilization or NPK treatments [104].For several other crops, SMS application to the soil led to higher yields than those obtained by mineral fertilization [105,116,121].
Soil biological properties play a critical role in maintaining ecosystem functions, enhancing crop productivity, and mitigating the adverse effects of pollutants. The beneficial effect on soil biological properties, including the structure of microbial communities and associated enzyme activities, is an attractive aspect of using SMS as an amendment. The main disadvantage of SMS is its state of stability/maturity, which, if imperfect or immature, could hamper its wide agronomic use. However, this issue could be overcome by composting it, alone or mixed with other crop residues, under controlled conditions [115].
Table 3. Indicative reports on the incorporation of SMS into soils (mushroom species; SMS type and application; main effects; reference). NR: not reported.
- Flammulina velutipes; fresh or sterilized SMS (5%, w/w) mixed with soil in glass jars; total and dissolved organic carbon, microbial biomass carbon and nitrogen, abundance and diversity of bacteria and fungi, and enzyme activities were enhanced [111].
- A. bisporus; SMS (45 and 85 ton ha−1) mixed with soil in pots; SMS promoted the presence of fungi in the highly connected fraction of the active microbial community [112].
- Auricularia auricula-judae; composted SMS, biogas residues and pig manure (1:1:1) in seedling pots; better seedling quality was obtained with the SMS-based substrate than with commercial seedling substrates [113].
- Volvariella volvacea; fresh, weathered, and carbonized SMS mixed with soil (1:2), combined with 0, 50 or 100% of the required rate of nitrogen fertilizer, in pots; weathered and carbonized SMS increased available N, whereas fresh SMS immobilized various nutrients; high pechay yields were obtained during the first and second crops on weathered and carbonized SMS, while fresh SMS led to high yields only during the third crop; yield was increased by N fertilizer only in the weathered and carbonized SMS treatments [114].
- A. bisporus; SMS applied in pots; SMS (as the sole fertilizer source) improved grass (Lolium multiflorum) yield by up to 300% (concentration-dependent response) compared to the untreated control with no NPK fertilization [115].
- NR; SMS used to supply 50% or 100% of the crop's nitrogen requirements; in contrast to mineral fertilizers, no increase in salt content was recorded when SMS was applied; lettuce and leek yields were similar whether SMS or mineral fertilizers were used [116].
- P. ostreatus; fresh SMS incorporated (15-20 t ha−1) over a period of four years to a depth of approx. 10 cm; SMS increased porosity and fractal dimension, and caused strong development of a granular microstructure in the A horizon (15-20 cm) and a spongy structure in the B horizon (45-50 cm and 70-75 cm) [117].
- A. bisporus, and A. bisporus with P. ostreatus (1:1, v/v); SMS incorporated to a soil depth of 30 cm, 1 month prior to planting, with both organic treatments providing 100 kg ha−1 of N; SMS amendment of a calcareous clayey-loam soil resulted in higher oxidizable organic carbon, organic N, extractable K, and available P compared to soil fertilized with 100, 22 and 208 kg ha−1 of N, P and K, respectively; the use of SMS provided lettuce yields similar to those obtained with mineral fertilizer [105].
- NR; SMS and peat moss alone or mixed (1:1, 1:2, and 2:1, v/v), with or without NPK fertilizer; SMS could replace up to 50% of the peat moss used to support Chinese kale (Brassica oleracea) production; SMS alone cannot be used as a growth medium because of its low nutrient content [118].
- A. bisporus, and A. bisporus with P. ostreatus (1:1, v/v); SMS-based treatments providing 100 kg ha−1 of N; SMS increased oxidizable organic carbon, organic N, available P, respiration rate, and phosphatase activity, while it did not affect pH, EC, catalase, or urease activities in soil cultivated with lettuce [119].
- NR; fresh or composted SMS applied annually for four years at rates of 8 and 25 Mg ha−1 (d.w.); SMS increased organic carbon, total N and labile organic forms, and enhanced microbiological activity in a semiarid vineyard soil [120].
- Agaricus subrufescens and Lentinula edodes; A. subrufescens SMS (5 to 40%, d.w.) and L. edodes SMS (5 to 25%, d.w.) mixed with soil in pots; SMS increased water retention and enhanced the soil microbial population; when the soil was supplemented with 10% of A. subrufescens SMS, lettuce dry weight increased by 2.2 and 1.3 times compared to the control and the NPK (44% N, 37% P2O5 and 48% K2O) treatments, respectively; fresh L. edodes SMS did not perform equally well [104].
- NR; SMS distributed onto field plots with a manure spreader at rates of 22.5, 45.0, and 90 kg m−2; corn yields were significantly higher in SMS-amended plots, and the nitrogen content of both grain and stover was significantly higher than in the control [121].
Several studies have revealed that using SMS as an ingredient in the composting process promotes the degradation of organic matter in mixtures with waste sludge, pig manure, corn stalks, and cow dung [122][123][124]. It has also been shown to enhance the humification process [125], reduce ammonia emissions [122,126], facilitate heavy metal passivation [125], and improve the quality of the final product [122,124]. Furthermore, using composted A. bisporus SMS alone as a substrate for the cultivation of Lolium multiflorum resulted in a yield improvement of up to 300% compared to the NPK-fertilization reference [115]. Co-composting of Au. auricula-judae SMS with biogas residues and pig manure led to the production of higher-quality seedlings than those obtained from commercial substrates [113]. Substrates containing composted A. bisporus and P. ostreatus SMS resulted in increased yields of baby leaf lettuce, even in the presence of the soil-borne plant pathogen Pythium irregulare [108]. Adopting appropriate methodologies, such as the addition of enzymes or earthworms (vermicomposting), during the composting process should further improve the quality of composted SMS by promoting the beneficial effects of autochthonous bacteria, increasing ion-exchange capacity, decreasing total carbon and the C/N ratio, and promoting the synthesis of nitrates [127][128][129].
In conclusion, the use of SMS as a soil amendment has beneficial effects on soil fertility and structure. SMS shows promising potential for substituting, at least partially, mineral fertilizers in continuous cropping, thus helping to mitigate secondary soil salinization and acidification and to avoid nutrient imbalances and the accumulation of toxic allelochemicals.
Use of spent mushroom substrate for plant-disease control
To deal with the negative repercussions of using chemical pesticides in agriculture, the application of environmentally friendly products for pest protection is crucial. Biocontrol agents, including live organisms and biological pesticides, are potential alternatives for controlling plant diseases. In contrast to chemical pesticides, biocontrol agents have little impact on non-target organisms; they do not leave behind long-lasting harmful leachates and do not lead to the development of resistant microbial strains or insects. However, they often exhibit low to medium effectiveness and a shorter shelf life [130,131].
The bioactive compounds in SMS have antimicrobial properties [132], which could be exploited against plant pathogens. Although in vitro studies have shown the potential suitability of mushroom and mycelium extracts against plant pathogens [87,133,134], they do not necessarily reveal in vivo effectiveness. SMS application has been shown to be effective in suppressing plant disease incidence. Table 4 presents examples of reported research findings on using SMS for controlling plant pathogens and pests, and also includes the SMS origin, the plant-pathogen/pest system, and the main outcome of each study.
Several studies on SMS-based biocontrol products against plant diseases concern L. edodes. The in vitro antimicrobial activity of L. edodes SMS [87,151] was further evidenced when hot-water extracts were used to inhibit the germination of Pyricularia oryzae conidia in rice plants and to suppress the growth of Phytophthora capsici in pepper plants [141,142]. A chitin/cellulose nanofiber complex isolated from L. edodes SMS exhibited significant activity against Alternaria brassicicola in Arabidopsis thaliana plants [135]. L. edodes SMS-based biocontrol agents reduced disease symptoms and promoted plant growth [135,142].
P. ostreatus SMS can provide another alternative to suppress plant diseases.Paddy straw-based P. ostreatus SMS, bio-fortified with Trichoderma asperellum, led to a remarkable reduction of the severity index of Fusarium oxysporum-induced disease, while it contributed to enhanced tomato growth [137].The application of a polysaccharide extract from P. ostreatus SMS and discarded L. edodes mushrooms reduced by 50% the severity of bacterial spot caused by Xanthomonas gardneri in tomato cotyledons, leaflets, and five-leaf plants [139].Phenolic-rich extracts from P. ostreatus SMS have been shown to prevent the development of the parasitic plant broomrape in faba bean cultivars [152], and to improve the rice growth and yield parameters [153].In another study, mixing composted SMS from either P. ostreatus or V. volvacea with a biofertilizer exhibited higher control efficacy against Ralstonia wilt and Phytophthora blight diseases, than using the biofertilizer alone [138].
SMS from less widely cultivated mushrooms has also shown suppressive activity against plant diseases. Application of SMS from Ly. decastes and P. eryngii to soils used for cultivating cucumber resulted in protection against disease symptoms caused by Colletotrichum orbiculare, Podosphaera xanthii, Cladosporium cucumerinum and Pseudomonas syringae [147,149]. A protective effect against Colletotrichum lagenarium in cucumber plants was observed after spraying a water extract of Ly. decastes SMS [149]. The incorporation of Ly. decastes SMS into soil suppressed the lesions caused by Al. brassicicola in Arabidopsis thaliana leaves; this effect was attributed to volatile components of the SMS [136].
In the case of Agaricus bisporus, tomato plants grown on SMS-containing substrates were resistant to infections by Septoria lycopersici, the causal agent of leaf spot disease [150]. The results so far indicate that the richness of SMS in antimicrobial compounds, together with its natural microbiome, which includes organisms that suppress soil-borne plant pathogens, is an essential prerequisite for developing relevant plant-disease control products. However, further experimentation, including evaluation in large-scale greenhouse and field trials, is required to fully exploit this potential within a solid sustainable agriculture model.
Effects of SMS on nutritional value and secondary metabolites production in plants
Plant secondary metabolites, including vitamins, terpenoids and polyphenols, in fruits and vegetables are important for reducing risks of cardiovascular diseases and maintaining good health [154,155].Those molecules exert a wide range of effects on the plant and associated organisms, and their production depends on various biotic and abiotic factors [156].
SMS application affects the content of secondary metabolites in plants. Vahid Afagh et al. [157] reported that the incorporation of Agaricus SMS leachates into sandy soil (up to 15%, v/v) significantly increased the content of essential oil, proline, and soluble sugars in chamomile (Matricaria chamomilla) compared to plants grown on non-supplemented soil. Increasing the SMS leachate content enhanced K and Na absorption, whereas N and P uptake was not affected. Similarly, the addition of SMS leachate (20-60%, v/v) to the soil increased the content of essential oil components, chlorophyll, and antioxidant compounds in chamomile [158]. Application of SMS as an amendment to soils where basil (Ocimum basilicum) was cultivated resulted in a two-fold increase in essential oil components and an enhancement of the plant's content of micro- and macronutrients [159].
SMS use in vegetable cultivation has demonstrated a wide range of effects on various parameters, including product yield and quality. Applying a leachate of P. ostreatus SMS and A. bisporus SMS (10-25%, w/w) to the soil increased the content of chlorophyll in pepper leaves and that of carotenoids and protein in the fruits [160]. Furthermore, A. bisporus SMS biofortified with Trichoderma harzianum inhibited lipid peroxidation and protein oxidation, significantly increased total polyphenol and flavonoid contents in tomatoes, and enhanced Fe2+/Fe3+ chelating activity and superoxide anion radical scavenging activity compared to an SMS-free control [161]. Similarly, P. ostreatus SMS biofortified with Trichoderma asperellum improved morpho-biochemical and nutritional parameters of tomato plants, such as the content of chlorophyll, carotenoids, total soluble sugars, total soluble proteins, lycopene, β-carotene, and ascorbic acid, and their antioxidant properties [137]. Another study, using SMS from A. bisporus or P. ostreatus to replace 25-100% (w/w) of the peat moss, reported that the effect of SMS on the macronutrient content of tomato, courgette, and pepper plants was species-dependent [162]. A proportional increase of N content with increasing SMS ratio in the substrate was observed for pepper, whereas no significant effect was evident for courgette and tomato. In addition, increasing SMS incorporation increased K content for courgette and pepper, but not for tomato. Last, courgette and pepper exhibited similar P contents when grown on SMS-based substrates and a peat control, whereas the P content of tomato seedlings grown on SMS-based substrates was lower than that of plants grown on peat.
Although the scientific data on the effects of SMS on the nutritional value of edible and medicinal plants are still limited, the available results reveal SMS potential to increase the content of specific elements and secondary metabolites in plants.
Spent mushroom substrate as source of enzymes and bioactive compounds
Producing enzymes and various bioactive compounds is a promising route for SMS valorization. SMS-derived enzymes are of interest to industrial sectors such as the brewing, baking, starch-processing, leather, and textile industries, as well as in bioremediation and the emerging biofuel and biorefinery business. SMS-derived bioactive molecules also have potential applications in the pharmaceutical, biomedical, feed, and food sectors.
Enzymes
SMS is a source of various enzymes that can be recovered by extraction with different solvent systems.Furthermore, SMS can be used as substrate for the cultivation of enzyme-producing microorganisms.
Recovery of enzymes from spent mushroom substrate
For growing on lignocellulosic biomass, white-rot fungi secrete hydrolytic and oxidative enzymes responsible for degrading complex polymers into low-molecular weight substances, which can be assimilated for fungal growth [163].The main groups of enzymes participating in fungal degradation of lignocellulosic materials are presented in Figure 2. Hydrolytic enzymes are responsible for deconstructing cellulose and hemicelluloses, while oxidative enzymes are involved in lignin degradation [7].Consequently, upon the end of cultivation, SMS contains extracellular fungal enzymes, such as ligninases, cellulases, and hemicellulases, that can be recovered using different extraction procedures.The level of enzyme activities and their corresponding titers depend on the growth substrate and the fungal species' ability to degrade different lignocellulose components.For example, since white-rot fungi degrade lignin and hemicelluloses preferentially, extracts of their spent substrates are rich in ligninases and xylanases, while cellulase activity is hardly detected.
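To make the two enzyme groups mentioned above more concrete, a few representative activities are listed here for orientation (standard EC numbers; the specific enzymes present in a given SMS depend on the fungal species and the substrate): among the hydrolases, endoglucanase (EC 3.2.1.4), cellobiohydrolase (EC 3.2.1.91), β-glucosidase (EC 3.2.1.21), and endo-1,4-β-xylanase (EC 3.2.1.8); among the lignin-modifying oxidoreductases, laccase (EC 1.10.3.2), manganese peroxidase (EC 1.11.1.13), lignin peroxidase (EC 1.11.1.14), and versatile peroxidase (EC 1.11.1.16).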
The enzymatic systems present in SMS of various fungal species make possible their application for different purposes.For example, P. ostreatus SMS can be applied for decolorizing textile effluents because it contains oxidoreductases that degrade the dye molecules [164].Similarly, the laccase and manganese peroxidase activities of P. pulmonarius SMS allow its direct application to remove polycyclic aromatic hydrocarbons from contaminated soil samples [165].However, rather than directly using the bulk SMS, many applications require using isolated enzymes that can be recovered from SMS.
Table 5 presents an overview of studies on the recovery of extracellular enzymes from SMS of various fungal species. The spent substrates of A. bisporus and oyster mushrooms (Pleurotus spp.) are commonly reported as sources of extracellular enzymes. Xylanases and cellulases are the most common hydrolases among the recovered enzymes, while laccases are the main reported oxidoreductases. Most studies provide a relatively detailed description of the extraction process used, whereas the purification protocols applied to the extracted enzymes, e.g. dialysis, ultrafiltration, anion-exchange chromatography, or gel filtration, are not always described in detail. The production of crude enzyme extracts and their application in areas where expensive purification can be avoided is often reported [164,181,182]. Some studies provide the exact identification of the extracted enzymes, including the complete EC classification number, while others provide trivial names or a more general classification without further details.
Indicatively (from Table 5), laccase has been recovered from Ganoderma lucidum SMS by extraction with sodium acetate buffer (pH 5.0; liquid-to-solid ratio 5; 4°C; 3 h), followed by partial purification via ammonium sulfate precipitation and dialysis [180].
Recovery of extracellular enzymes from SMS was reported for the first time by Ball and Jackson in 1995, using A. bisporus spent compost [166]. In that study, it was found that lignocellulose-degrading enzymes can be recovered from spent mushroom compost by extraction with distilled water [166]. The evaluation of the enzyme activities revealed high levels of hemicellulases (endoxylanase, β-xylosidase, xylan acetylesterase, and arabinofuranosidase), cellulose-degrading enzymes (endoglucanase, cellobiohydrolase, and β-glucosidase), and ligninolytic enzymes (peroxidase and phenoloxidase). The activity and stability of the enzymes suggested their potential for the biological upgrading of wheat straw. After Ball and Jackson's pioneering study, A. bisporus SMS has frequently been studied for enzyme recovery by extraction with different solvent systems [167][168][169]. Trejo-Hernandez et al. [167] reported laccase extraction with Tris-HCl buffer, while Mayolo-Deloisa et al. [168] developed a protocol using an aqueous potassium phosphate-polyethylene glycol two-phase system for the purification of laccase extracted from A. bisporus SMS. Devi et al. [169] recently reported the recovery of oxidative and hydrolytic enzymes by suspending A. bisporus SMS in sodium citrate buffer, followed by acetone precipitation and subsequent chromatographic purification. The partially purified enzyme extract was evaluated for the hydrolysis of SMS polysaccharides for ethanol production.
The recovery of enzymes from SMS resulting from the cultivation of mushrooms of the genus Pleurotus has been well investigated. The first studies were published in the early 2000s, when different solvent systems, including water, sodium citrate buffer, and sodium phosphate buffer, were evaluated for extracting hydrolases and oxidoreductases from the SMS of P. sajor-caju [170] and P. ostreatus [171]; the latter study also included SMS of other species. In other studies, different buffers and conditions were evaluated for the extraction of α-amylase, endoglucanase, laccase, and endoxylanase from SMS of P. eryngii, P. ostreatus, and P. cornucopiae; the best recoveries were achieved using sodium citrate buffer [172,173]. Sadiq et al. [174] used a sodium tartrate buffer to extract manganese peroxidase (MnP), laccase, and lignin peroxidase (LiP) from P. ostreatus SMS and used the extract for the bioremediation of contaminated soil. The SMS of P. florida [36,44] has also been reported as a source of lignin oxidases (versatile peroxidase (VP), MnP, LiP, and laccase) and polysaccharide hydrolases (CMCase, xylanase, and cellobiohydrolase). Crude extracts of P. pulmonarius SMS demonstrated laccase and MnP activity [163]. P. pulmonarius SMS was also used to extract several hydrolases and oxidoreductases, and the extract was applied to the hydrolysis of palm oil mill effluent to produce biohydrogen [176]. The extraction of xylanases from P. ostreatus and P. citrinopileatus SMS has also been investigated [177].
The potential for enzyme recovery from SMS of other mushroom species has also been investigated.For example, Schimpf and Schultz (2016) screened selected enzyme activities in SMSs of L. edodes, He. erinaceus, Stropharia rugosoannulata, Fomes fomentarius, and Gr.frondosa and developed a protocol for recovery of lignocellulolytic enzymes from L. edodes SMS [178].In another study, SMS from the cultivation of Au. auricula-judae, Coprinus comatus, Agrocybe cylindracea, He. erinaceus, and H. marmoreus have also been investigated as a source of xylanases [177].Screening of enzymes extracted from SMS of L. edodes, H. marmoreus, F. velutipes, and three Pleurotus strains revealed higher activity of cellulose-degrading enzymes for L. edodes extract, while the extracts of Pleurotus strains displayed higher laccase activity and ability to decolorize Coomassie Brilliant Blue [179].
Enzyme preparations with high xylanase activity were obtained from extracts from Tremella fuciformis SMS purified by ammonium sulfate precipitation and gel filtration chromatography [177].The purified enzyme showed good thermal stability and potential for saccharification of xylan contained in wheat bran, sugarcane bagasse, and other biomass residues.Optimal conditions for laccase extraction from G. lucidum SMS and utilization of the extract to remove toxic chemicals from an aqueous environment have also been reported [180].
Using SMS as substrate for cultivation of enzyme-producing organisms
Since SMS is rich in nutrients and contains potential carbon sources, it can be used as a substrate for producing enzymes by cultivating enzyme-producing organisms. SMS has been used to cultivate fungi of the genus Trichoderma, the most relevant genus for the industrial production of cellulases [183]. Cellulase production requires a cellulosic substrate to induce the enzyme system of Trichoderma spp., which consists of endoglucanases, exoglucanases, and β-glucosidases [184]. Cellulose contained in lignocellulosic materials is a more suitable inducer than other alternatives, which are too expensive for industrial-scale use. Before the cultivation of a cellulase producer, lignocellulose must be pretreated, for example by a hydrothermal process [185], to remove lignin and facilitate enzyme access to cellulose. A drawback of hydrothermal pretreatment is that it leads to the formation of by-products, such as furan aldehydes, aliphatic acids, and phenolic compounds, which inhibit microorganisms and enzymes [186]. Using SMS avoids the downsides of pretreatment since, during mushroom cultivation, lignin and part of the polysaccharides are degraded without forming inhibitors [187]; the substrate is therefore ready for use in microbial fermentations.
Enzyme production by microorganisms cultivated on SMS has been less investigated than the extraction of enzymes from SMS not subjected to a new cultivation cycle. Pleurotus spp. SMS is among the substrates most commonly used for enzyme production by other organisms (Table 6). Some studies report using SMS as a substrate for conventional enzyme producers, while in other studies, the enzymes are produced by edible mushrooms cultivated on SMS.
Trichoderma spp.are among the most common conventional enzyme producers cultivated on SMS, but there are also some reports on Aspergillus and Penicillium spp.In a recent study, He et al. [188] reported the production of cellulases by T. reesei grown on corn cobs-based SMS from Au. polytricha, Auricularia nigricans, and P. ostreatus.In that study, cellulase production was more effective when using earlier 'flushes' of SMS than when several harvests were produced on the same substrate.The highest cellulase activity was obtained using the third flush of mushrooms of Au. polytricha SMS, particularly when the fermentation process was assisted with ultrasound.The study showed that higher cellulase activity could be obtained by cultivation on untreated SMS than on acid-or alkali-treated SMS.The potential of spent mushroom compost (SMC) of A. bisporus for cultivation of enzyme-producing fungi has also been shown.Production of endoglucanase, endoxylanase, and β-glucosidase using Trichoderma isolates and a strain of Aspergillus niger on A. bisporus SMC without nutrient supplementation was reported [189].SMS resulting from growing P. sajor-caju on sugarcane bagasse was used to produce cellulases and xylanases by Penicillium echinulatum [190].
The production of enzymes by cultivating edible mushrooms on SMS has also been reported.P. ostreatus SMS supplemented with wheat bran and soybean flour was a suitable substrate for the cultivation of P. ostreatus, P. pulmonarius, Ganoderma adspersum, Ganoderma resinaceum, and L. edodes for producing laccase [191].The study showed good potential of the supplemented SMS for laccase production by Ganoderma spp.and fruitbodies by Pleurotus spp.In another study by the same group, laccase was produced by cultivating P. ostreatus and P. pulmonarius on P. ostreatus SMS, and the crude enzyme's potential for removing phenolic compounds from olive mill and winery wastewaters was evaluated [26].
Another approach is cultivating enzyme producers on SMS that has already been subjected to the extraction of extracellular enzymes. This approach has been tested for P. pulmonarius SMS, which was first subjected to the extraction of lignin-degrading enzymes and then used as a substrate for producing cellulases by cultivating Trichoderma asperellum [163]. It was also applied to P. florida SMS, which was first used as a source of laccase and several hydrolases and then directed to the production of cellulases by either Trichoderma longibrachiatum or Aspergillus aculeatus [175]. T. longibrachiatum gave higher activities of endoglucanase, exoglucanase, and xylanase, while As. aculeatus was a better cellobiase producer (Table 6).
Bioactive compounds
SMS contains bioactive compounds of different functionality and origin.The fungal mycelium contains polysaccharides, sterols, proteins, polyphenols, vitamins, and other bioactive molecules.Mycelial growth throughout the surrounding environment also results in the secretion of potentially useful bioactive compounds.In addition, the extractive fraction of the lignocellulosic substrate and the oligomeric products from fungal degradation of polysaccharides and lignin might also be sources of bioactive substances.However, while the bioactive molecules of the sporocarps of edible fungi have been extensively investigated [192], the information on the bioactive potential available in SMS is still limited.Recovery of bioactive compounds is a promising direction for valorizing SMS.
Polysaccharides
Polysaccharides are among the bioactive substances responsible for the immunomodulatory and antitumor effects of edible and medicinal mushrooms [193]. However, most research on fungal polysaccharides has investigated fruitbodies or mycelia as sources [194], while extraction from SMS has been less explored. A study on the extraction and characterization of a polysaccharide from L. edodes SMS, published in 2012 by Zhu et al., initiated research on SMS as a source of bioactive molecules [195]. Since then, several relevant reports on obtaining bioactive extracts from SMS have been published. Most publications deal with L. edodes, Pleurotus spp., and Ganoderma spp., but SMS from the cultivation of other fungal species has also been investigated (Table 7).
Water extraction at temperatures around 80-90°C, followed by alcohol precipitation, and chromatographic purification, is a standard procedure for recovering polysaccharides from SMS. Accordingly, a heteropolysaccharide displaying antibacterial activity against three different microorganisms was recovered from L. edodes SMS [195].The same method, combined with partial hydrolysis, either chemical [57] or enzymatic [196], was applied to L. edodes SMS for extracting polysaccharides showing antioxidant, antiinflammatory, and renoprotective activities.Water extraction has also been reported to extract polysaccharides from G. lucidum SMS [197], and to extract β-glucans and other compounds from rice husk-based SMS of Pycnoporus sanguineus and Panus strigellus (syn.Pleurotus tubarius) [198].
Extraction with aqueous alkaline solutions is another useful method for recovering polysaccharides. He et al. [199] reported obtaining a polysaccharide extract from P. eryngii SMS by alkaline extraction followed by deproteinization and gel filtration chromatography. The refined product was a polysaccharide-protein complex containing 99% (w/w) of a polysaccharide composed of anhydroxylose, anhydroglucose, and anhydroarabinose units. Strong antioxidant activity, with potential food applications, was revealed in vitro for the polysaccharide-protein complex. A comparable extraction approach has also been used to recover polysaccharides from L. edodes SMS [193]. Exhaustive characterization revealed that the L. edodes SMS extract contained heteropolysaccharides exerting antiproliferative effects against six tested human tumor cell lines.
Subcritical water extraction (SWE) can be applied to extract bioactive molecules.SWE of polysaccharides from P. ostreatus SMS and L. edodes residual basidiocarps by autoclaving at 120°C has been reported [139].
Partial enzymatic hydrolysis can also be used to extract polysaccharides from SMS. Hydrolysis with cellulases for two hours was used to recover polysaccharides from the SMS of C. militaris [200]. Four polysaccharide fractions were isolated, and three displayed good antioxidant activity with no cytotoxicity. Enzyme treatment has also been used to recover polysaccharides from the SMS of Ag. cylindracea, L. edodes, H. marmoreus, P. ostreatus and C. militaris [201]. The polysaccharides were isolated from the extracts by ethanol precipitation and purified by deproteinization with the Sevag reagent, and their antioxidant activity was evaluated in vitro. The polysaccharides from Ag. cylindracea SMS had the best oxygen free radical-scavenging capacity and ferric reducing antioxidant power (FRAP), while those from H. marmoreus and P. ostreatus displayed the best ABTS and DPPH radical-scavenging activities.
Another way of producing chemical compounds of interest is to use SMS as a substrate for cultivating other organisms.For instance, P. ostreatus SMS was reported to be used for producing crude exopolysaccharides by cultivation of P. ostreatus and P. pulmonarius [191].
Sterols and other compounds
Ergosterol, the most abundant sterol in fungi, has biological activities relevant to food, pharmaceutical, and biomedical uses, and it is a precursor of vitamin D2. Most reports on ergosterol extraction from mushroom residues deal mainly with stipes of fruitbodies or mushrooms not meeting commercial specifications [203]. However, the potential of L. edodes SMS as a source of ergosterol has recently been demonstrated [58]. Ergosterol-rich extracts were obtained from L. edodes SMS using ultrasound-assisted extraction, a non-conventional technique for extracting natural products from various biomaterials; in vitro experiments revealed that the produced extracts have antitumor activity against three cancer cell lines.
The presence of steroids and saturated terpenes has been shown in water extracts of SMS from the cultivation of Py. sanguineus and Pa. strigellus (syn. P. tubarius) on rice husk [198]. The purine analog pentostatin, a potent anticancer drug, was produced by cultivating a cellulose-degrading transformant of C. militaris using H. marmoreus SMS as substrate [202].
Phenolic compounds can also be extracted from SMS. Elsakhawy et al. reported the production of phenol-rich extracts from P. ostreatus SMS using either 0.5 N NaOH [152] or tap water [153] as solvents.The produced extracts were further assayed as plant-disease control and biofertilizer.
Spent mushroom substrate valorization as part of cascade use of plant biomass
The generation of plant biomass resources by agriculture and forestry takes a long time and requires considerable land areas; thus, their utilization should be rational and efficient. Residual biomass materials, such as side/waste streams or byproducts from varying stages of production/processing chains, contain components of high potential for value-added applications. A common approach for biomass valorization today is to burn it, in a resource-inefficient way, to generate heat and power for energy purposes. For bioeconomy development in a resource-efficient way, cascade use of plant biomass should always be considered. Cascade use, also known as cascading use [204], is a complex interaction of material flows used as a strategy to increase resource efficiency in biomass processing. Cascade use occurs when biomass is processed through a series of material uses (Figure 3), by reuse and recycling, before finally being used for energy recovery [205].
Cultivation of edible mushrooms plays a unique role in supplying highly nutritive and health-promoting food. Still, it generates vast amounts of SMS, which is mostly discarded or inefficiently used despite its potential for generating value-added products. As a cellulose-rich bioresource, SMS can also be considered a material of interest for developing sugar-platform applications after enzymatic saccharification. Compared with the lignocellulosic materials used to formulate the initial mushroom substrate, SMS is more susceptible to biochemical conversion using enzymes and microorganisms [16,187,206]. That is mainly because the cultivation of white-rot edible fungi constitutes a biological process that modifies lignocellulose by removing a large part of the lignin and hemicelluloses, which interfere with the enzymatic saccharification of cellulose. Furthermore, SMS, i.e. a material resulting from the aforementioned process (which could be considered a biological pretreatment), contains few inhibitory compounds or external chemicals that might negatively affect downstream processing or harm the environment [187,206].
Applying cascade uses to the processing of plant biomass by mushroom cultivation, combined with SMS valorization through biochemical conversion and other approaches, is expected to maximize the cost-effectiveness of a value chain comprising several potential products. The cascade-use concept also results in minimizing resource loss and environmental impacts. Following a cascading approach, SMS, as the primary by-product of mushroom cultivation, can be re-used as raw material for new processes, extending total biomass availability within the system. That is a rational approach, where different valuable biomass constituents are recovered and converted into value-added products. Energy uses of residual biomass are considered only at the end of the life cycle, when all higher-value products and services have been exhausted. There are different possible examples of multi-stage cascading uses for SMS valorization. Three promising published case studies are discussed in this section.
Case study 1: food - ethanol - solid fuel
The integrated production of L. edodes mushroom (food) and biofuels from hardwood residues can be an example of cascade use [16,187,206,207]. Food (mushrooms) is produced on a lignocellulosic substrate. Concomitantly, mushroom cultivation selectively degrades lignin and hemicelluloses, thus facilitating the enzymatic saccharification of cellulose. Glucose from the saccharification process can then be fermented to ethanol using yeast. Enzymatic saccharification also generates lignin-rich solid leftovers, which can be used as a solid fuel (Figure 4).
From a circular bioeconomy point of view, the forest residues can be considered primary bioresources from forest production (cf. Figure 3). Exploiting the forest residues as mushroom growing substrates results in the production of fruitbodies as primary products. SMS is the secondary bioresource and can be converted to the secondary product, ethanol. Ethanol can be used to synthesize renewable polyethylene to produce green plastics, or for fuel applications, including advanced jet biofuels. After cellulose saccharification, the solid leftover can also be recovered as a tertiary bioresource/biowaste and converted to solid fuel, a 'tertiary product.' A recently published mass balance analysis revealed that one ton of birch-based initial mushroom substrate might result in about 600 kg of fresh shiitake (L. edodes) fruitbodies (90% moisture), 130 liters of ethanol, and 300 kg (dry mass) of solid biofuel [187]. This system/approach can also be applied to other mushroom species. Using an experimental setting like the one used for L. edodes, Chen et al. [209] found that one ton of birch-based initial substrate might result in about 400 kg of fresh fruitbodies (90% moisture) of Au. auricula-judae, 35 liters of ethanol, and 300 kg dry mass of solid biofuel. The solid fuel was found to have a relatively high calorific value and favorable characteristics for direct combustion to produce heat. The generated heat can be used for the pasteurization of substrates or space heating.
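As a rough illustration, the per-ton yields reported for the two species can be scaled to an arbitrary amount of initial substrate. The Python sketch below uses only the figures quoted above; the assumption that yields scale linearly with substrate mass, as well as all function and variable names, are ours and purely illustrative.

# Rough mass-balance sketch for the food - ethanol - solid fuel cascade.
# Per-ton yields are the values quoted above; linear scaling is a simplifying assumption.
YIELDS_PER_TON = {
    "L. edodes": {"fresh_mushroom_kg": 600, "ethanol_L": 130, "solid_fuel_kg_dry": 300},
    "Au. auricula-judae": {"fresh_mushroom_kg": 400, "ethanol_L": 35, "solid_fuel_kg_dry": 300},
}

def cascade_outputs(substrate_tons, species):
    """Scale the reported per-ton yields to a given amount of initial substrate."""
    return {product: amount * substrate_tons
            for product, amount in YIELDS_PER_TON[species].items()}

# Example: 2.5 tons of birch-based substrate cultivated with shiitake.
print(cascade_outputs(2.5, "L. edodes"))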
The outcome of the production chain can be affected by the composition of the initial substrates used for mushroom cultivation. Chen et al. [206] reported that an alder-based substrate led to 4% more mushroom fruitbodies, 14% more ethanol, and 23% more solid fuel than a birch-based substrate. On the other hand, an aspen-based substrate resulted in a 37% lower yield of fruitbodies than the birch-based one, although the yields of ethanol and solid fuel were comparable for substrates from both tree species.
The same concept is also applicable to other biofuels. For instance, another 'food - biofuel - solid fuel' alternative is to produce biogas instead of ethanol as a secondary product. Lin et al. [210] cultivated shiitake on woodchips and produced biogas by anaerobic digestion (AD) of SMS; at the end, around 53-57% (dry mass) of the substrate was left over as a solid. Since the AD process consumes mostly carbohydrates [211,212], the leftovers are expected to have a relatively high content of lignin, a component with a high calorific value. Therefore, using the leftovers as a solid fuel for a self-supporting heating system could be meaningful. In a slightly different alternative, rice straw was used as the main ingredient of a P. ostreatus substrate, fruitbodies were produced as the primary product, SMS was directed to AD for producing biomethane as a secondary product, and the AD digestate was used as a biofertilizer (tertiary product) for rice cultivation [213].
Case study 2: food - biogas - 2nd cycle mushroom
Another example of a cascade system was reported by Ikeda et al. [214] using the mushroom 'enokitake' (F. velutipes) cultivated on a substrate based on corncobs supplemented with rice bran. In this case, agricultural residues were the primary bioresource, and enokitake fruitbodies were the primary product. The SMS resulting after mushroom harvest was anaerobically digested to produce biogas, the secondary product. After the AD process, around 45% of the initial mass was left as solid residue or digestate. In the next step, KOH or NaOH was used for pretreating the AD residue (SMS-ADR), which was then mixed with rice bran at a 50:50 weight ratio to formulate a new substrate to be subsequently used for a second enokitake cultivation cycle. The results were promising: the yield of the tertiary product, i.e. mushrooms cultivated on SMS-ADR, was comparable to that of the primary one, i.e. mushrooms cultivated on the corncob-based 'standard' substrate. Crude protein, ether-extracted compounds, crude fiber, minerals (Na, P, Ca, K, Mg), and free amino acids in the fruitbodies showed contents similar to those obtained from the standard substrates. The study did not discuss further use of the second-cycle SMS (SMS-II). In our opinion, this cascading system could still be extended to a quaternary product by exploiting the potential of SMS-II as a biofertilizer or a soil amendment.
Cascade systems including biogas as the secondary product are also feasible for other mushroom species. Since the lignin content decreases during mushroom cultivation, the resulting SMS is accessible to anaerobic microbes, thus facilitating AD conversion of carbohydrates to biogas. On the other hand, after biogas production, and although data on the chemical composition of the digestate are not available [215], the lignin content is expected to increase owing to its recalcitrance to degradation by anaerobic bacteria [211,212,216]. Therefore, it is reasonable to choose white-rot fungi again, which can break down the recalcitrant lignin, to produce additional value-added tertiary products. Although the low pH and the presence of unknown by-products may inhibit a second mushroom cultivation cycle, it was shown that KOH or NaOH soaking is a viable method to improve the suitability of the digestate for further use in enokitake cultivation [214]. Nevertheless, the precise mechanism behind the alkaline reactivation of the AD digestate for mushroom cultivation remains to be clarified.
Case study 3: food - 2nd cycle mushroom - enzymes
Another possible cascading chain can include two cycles of mushroom cultivation in a row, followed by recovery of extracellular enzymes as a tertiary product. Economou et al. [191] reported a case study where oyster mushroom (P. ostreatus) was produced on a wheat straw-based substrate, and the resulting SMS (SMS-I) was tested as the main ingredient of the substrate for a second mushroom cultivation cycle. After harvesting the fruitbodies from the second production cycle, the generated SMS (SMS-II) was used as a source for the recovery of the lignin-degrading enzyme laccase. Among the five fungal species tested for the second mushroom production cycle, P. pulmonarius resulted in the SMS providing the highest yield of laccase, 2465 U g^-1 day^-1 (dry mass basis). The crude laccase extract was then used for the dephenolization of wastewaters [26]. The authors did not explore possible uses of the solid stream remaining after laccase extraction from SMS-II. A potential extension of the cascading system would be possible by using that stream as either a biofertilizer or a solid fuel.
Using P. ostreatus as the species involved in the first step of the cascading system is a reasonable strategy considering that Pleurotus spp. are among the most studied white-rot fungi for biological treatments of lignocellulosic materials [217]. Compared with other edible fungi, they have the advantages of a relatively shorter life cycle and a broader adaptation to substrate assortments and growing environments. Even though Pleurotus lignocellulolytic enzyme activities are generally comparable to those of L. edodes [215], the lignocellulose degradation capacity of the former is generally lower than that of shiitake, probably because of the shorter life cycle [217]. It must be emphasized that the determination of SMS composition, which is essential for fully understanding the potential of the SMS to be directed to new cultivation cycles, is often underestimated in the literature.
The cultivation of white-rot edible fungi on primary bioresources results in food (mushrooms) and functions as a biological pretreatment that facilitates biochemical conversions. Therefore, mushroom cultivation is crucial in cascading systems of lignocellulosic biomass utilization. In addition to the case studies discussed above, many other cascade systems producing fruitbodies as a primary product, and including other products or services, can be proposed to valorize SMS and other residual streams. The feasibility of producing antibiotics [218], antitumor sterols [58], seedbeds for vegetables [219], fertilizers [104], soil bioremediation agents [220], enzymes [175], biochar [221], and other products has been demonstrated. Some products could be considered as different 'puzzle pieces' to be chosen and integrated into a chain of cascade uses. However, appropriate approaches ensuring optimal process integration remain to be developed. Process integration has to be developed through interdisciplinary approaches to maximize system values for the circular bioeconomy and the protection of the environment. Furthermore, systematic evaluation (e.g. life cycle assessment) and optimization of processes for the cascading uses must be addressed.
Future directions and conclusions
Cultivation of edible and medicinal mushrooms is a very dynamic business, with impressive development during the last decades. However, increased mushroom production leads to the generation of large quantities of spent mushroom substrate (SMS). The accumulation of unused SMS, or its limited or low-value applications, undermines the future of the pertinent commercial activities. Therefore, achieving an efficient valorization of SMS, beyond its current low-value use, is of paramount importance for the sustainable development of the mushroom industry. The research discussed in this review shows the vast potential of SMS as a source of valuable products and services.
The presence of valuable nutrients and energy sources for supporting new cultivation processes makes SMS a suitable substrate component for new mushroom cultivation cycles, provided that a suitable treatment or supplementation is applied. Reusing SMS in new cultivation of mushrooms of either the same or other species has significant potential for reaching high yields in an environmentally sustainable way and, at the same time, contributes to the reduction of production costs. The high nutritional value of SMS could also be exploited for the development of new feeds; the output of recent experimental work convincingly shows the feasibility of including SMS in the diets of poultry, ruminants, and monogastric animals, as well as, beyond traditional husbandry, in pisciculture and insect farming. However, making SMS a regular diet ingredient poses complex challenges related to its fiber content and digestibility, and its acceptance by the animal. Recent research has addressed those downsides by demonstrating that appropriate treatments can enhance the nutritional value and acceptability of SMS.
The physical properties and chemical composition of SMS support the development of novel environment-friendly and cost-effective bio-based products, which can be used as part of sustainable agronomic practices to substitute fossil-based fertilizers and synthetic pesticides. Well-designed experiments have shown that SMS application as a soil amendment or fertilizer has beneficial effects on soil fertility and structure, without causing secondary salinization or acidification. The reported research also shows the potential of SMS as a source of products for biological control against plant diseases, as well as its favorable effect on the production of secondary metabolites in plants and on enhancing the nutritional value of fruits and vegetables. Scaling up the experimentation to large-scale greenhouse and field trials is required. Furthermore, additional demonstration actions are expected to fully evidence the potential of SMS within a sustainable agriculture model.
SMS contains extracellular enzymes secreted during fungal growth and used to degrade the substrate's macromolecules. Those enzymes make it possible to use SMS in services such as decolorization of textile effluents, bioremediation of contaminated soil, and wastewater treatment. Enzymes can also be extracted from SMS using different solvent systems. Furthermore, the potential of SMS as a substrate for the cultivation of enzyme-producing microorganisms has been shown. The crude extracts of SMS enzymes can be subjected to various degrees of purification, rendering refined preparations suitable for added-value applications where enzyme purity is a decisive criterion.
Several publications report on exploiting SMS bioactive compounds for various uses. However, the extraction of bioactive compounds from SMS is still an emerging area of high interest. Recently published results indicate that SMS bioactive molecules could be used as added-value, sustainable, bio-based ingredients in socially sensitive business sectors. SMS-derived nutraceuticals, food supplements, functional foods, and active ingredients might be the foundation of a new 'next-generation mycotherapeuticals' sector. That would require developing appropriate protocols for extracting bioactive molecules from SMS, a task that faces major challenges regarding the effectiveness of the extraction without affecting the properties of the molecules of interest, and while avoiding the degradation of non-targeted compounds that might also be of interest.
Applying the cascade-use concept to SMS valorization is essential to increase resource efficiency in biomass processing and mushroom production. Arranging different alternatives of SMS utilization in cascading systems, where mushroom production is included as the primary process and the byproducts are converted to value-added products, will result in a value chain with minimal resource losses and with no adverse environmental impact, in agreement with the principles of sustainable development. Appropriate implementation of the cascade-use concept requires significant efforts to ensure optimal process integration based on interdisciplinary approaches. By achieving it, the system values can be maximized, and the extensive use of SMS for generating high-value products and services within a circular bioeconomy scenario can become a reality.
Figure 1. Factors affecting cultivation parameters and the use of SMS in new mushroom crops.
Figure 2. Enzymes participating in fungal degradation of lignocellulosic substrates.
Table 1. Reuse of spent mushroom substrate (SMS) for the cultivation of various mushroom species as reported in pertinent publications: origin and composition of SMS, mushroom to be cultivated, new substrate formulation and supplements, biological efficiency (BE) reported for the crop obtained, and main comments on the results of the respective study. Abbreviation used: NR, not reported.
Table 2. Reuse of spent mushroom substrate (SMS) as animal feed based on the outcome reported in pertinent publications. Abbreviations used: ADF, acid detergent fiber; ADL, acid detergent lignin; HWE, hot water extract; NDF, neutral detergent fiber; NR, not reported; S-SMS, steam-treated SMS; UN-SMS, untreated SMS.
Table 3. Reuse of spent mushroom substrate (SMS) as soil amendment based on the outcome reported in pertinent publications. Abbreviations used: NPK, nitrogen, phosphorus, potassium; NR, not reported; OC, organic carbon; OM, organic matter; PGPB, plant growth promoting bacteria; TN, total nitrogen.
Table 4. Reuse of spent mushroom substrate for the control of plant pathogens and pests based on the outcome reported in pertinent publications. Abbreviations used: ACT, aerated compost tea; CT, compost tea; NCT, non-aerated compost tea; NR, not reported.
Table 6. Production of enzymes by fungal cultivation on spent mushroom substrate. Abbreviations used: SSF, solid-state fermentation; SmF, submerged fermentation. | 2023-09-07T06:17:10.477Z | 2023-09-05T00:00:00.000 | {
"year": 2023,
"sha1": "131e948f3f7f246306b68f0ba229c692a62b11e9",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21655979.2023.2252138?needAccess=true&role=button",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "36c3e14edc4ba44ccf6f417330c3e809edd9d64b",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55592261 | pes2o/s2orc | v3-fos-license | A CHOICE OF SOBOLEV SPACES ASSOCIATED WITH ULTRASPHERICAL EXPANSIONS
We discuss two possible definitions for Sobolev spaces associated with ultraspherical expansions. These definitions depend on the notion of higher order derivative. We show that in order to have an isomorphism between Sobolev and potential spaces, the higher order derivatives to be considered are not the iteration of the first order derivatives. Some discussions about higher order Riesz transforms are involved. Also we prove that the maximal operator for the Poisson integral in the ultraspherical setting is bounded on the Sobolev spaces. 2000 Mathematics Subject Classification. Primary: 42C05; Secondary: 42C15.
Given $\beta > 0$ we consider the fractional integral $L_\lambda^{-\beta/2}$, see (6). The ultraspherical potential space $L^p_{\lambda,\beta}$ is defined as the range of the operator $L_\lambda^{-\beta/2}$ on the space $L^p(0,\pi)$. We endow $L^p_{\lambda,\beta}$ with the norm induced by the usual norm in $L^p(0,\pi)$, that is, the norm of $f = L_\lambda^{-\beta/2} g$ is $\|g\|_p$, for $g \in L^p(0,\pi)$. Given $m \in \mathbb{N}$, the ultraspherical Sobolev space $W^p_{\lambda,m}$ is defined in (4) in terms of higher order derivatives, which we write in abridged form as $D^{(k)}$. We make the convention $D^{(0)} = \mathrm{Id}$. Throughout the paper the parameter $\lambda$ is a fixed number; hence the absence of $\lambda$ in the notation $D^{(k)}$ should not cause confusion. We equip $W^p_{\lambda,m}$ with the norm $\|\cdot\|_{W^p_{\lambda,m}}$ given by (5). We study some properties of the spaces $L^p_{\lambda,m}$ and $W^p_{\lambda,m}$ that lead to the main result of the paper.
Theorem 1. Let $\lambda > 0$, $1 < p < \infty$ and $m \in \mathbb{N}$. Then $W^p_{\lambda,m} = L^p_{\lambda,m}$.
Simplifying matters, the key behind the proof of this theorem is to establish an equivalence of the type $\|D^{(k)} f\|_p \sim \|L_\lambda^{k/2} f\|_p$, or equivalently $\|D^{(k)} L_\lambda^{-k/2} g\|_p \sim \|g\|_p$. In other words, two facts have to be proved: first, the boundedness on $L^p(0,\pi)$ of the operators $D^{(k)} L_\lambda^{-k/2}$; secondly, a certain inverse process that roughly gives $\|g\|_p \le C\,\|D^{(k)} L_\lambda^{-k/2} g\|_p$. For the inverse process we use auxiliary operators, see Proposition 5. In the proofs of this boundedness we use, among other tools, a Muckenhoupt multiplier transplantation result, see Lemma 1. The operators $D^{(k)} L_\lambda^{-k/2}$ play the role of the "higher order Riesz transforms". These ideas are developed in Section 3.
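For concreteness, the norms just described can be summarized in standard notation as follows. This is a sketch consistent with the definitions above; the precise displays labelled (4) and (5) in the original may differ in form.

% Sketch of the potential-space norm and the Sobolev space/norm, assuming the
% standard formulation implied by the surrounding text.
\[
  \|f\|_{L^p_{\lambda,\beta}} = \|g\|_{L^p(0,\pi)},
  \qquad \text{where } f = L_\lambda^{-\beta/2} g, \ g \in L^p(0,\pi),
\]
\[
  W^p_{\lambda,m} = \bigl\{ f \in L^p(0,\pi) : D^{(k)} f \in L^p(0,\pi),\ k = 1,\dots,m \bigr\},
  \qquad
  \|f\|_{W^p_{\lambda,m}} = \sum_{k=0}^{m} \|D^{(k)} f\|_{L^p(0,\pi)}.
\]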
Other higher order Riesz transforms associated with the operator $L_\lambda$ were defined in [5] in terms of the operator $D_\lambda$ defined in (1). These Riesz transforms suggest defining a Sobolev space as the subspace of functions $f \in L^p(0,\pi)$ such that $D_\lambda^k f \in L^p(0,\pi)$, $k = 0, 1, \dots, m$, endowed with the corresponding norm. In Section 4, continuing with this line of thought, we ask whether a result like Theorem 1 is possible for these alternative Sobolev spaces, namely, whether they are isomorphic to the potential spaces $L^p_{\lambda,m}$. We prove that, although the Riesz transforms are bounded on $L^p(0,\pi)$, the inverse process that is needed for a theorem like Theorem 1 does not work. In fact, Theorem 2 below will be proved. We should mention here that we give a proof of the boundedness of the Riesz transforms different from the one given in [5].
Theorems 1 and 2 suggest that the adjective "Sobolev" should be given to the spaces $W^p_{\lambda,m}$. Finally, some properties of these Sobolev spaces $W^p_{\lambda,m}$ are analyzed. In Section 3 it is shown that, if $W^p_m(I)$, $I$ a real interval, denotes the classical Sobolev space on $I$, then the functions of $W^p_{\lambda,m}$ belong to $W^p_m((a,b))$ for every $0 < a < b < \pi$ (Proposition 3). This fact and the procedure developed by Kinnunen [11] allow us to prove in Section 5 the following theorem about the maximal operator for the Poisson integral, $P^\lambda_*$.
Throughout this paper, $C$ denotes a suitable positive constant that can change from one occurrence to another. For every $1 \le p < \infty$, we denote as usual by $p'$ the conjugate exponent of $p$.
Negative powers of the operator $L_\lambda$ can be defined as ultraspherical multipliers as follows. If $\beta > 0$, the operator $L_\lambda^{-\beta}$ is given by (6). It is clear that, for every $\beta > 0$, $L_\lambda^{-\beta}$ defines a bounded operator from $L^2(0,\pi)$ into itself.
Let $S_\lambda$ be the linear space generated by the system $\{\varphi^\lambda_n\}_{n \in \mathbb{N}}$ of ultraspherical functions. This linear space plays an important role in our study. $S_\lambda$ is a dense subspace of $L^p(0,\pi)$, $1 \le p < \infty$; see [19, Lemma 2.3].
$L_\lambda^{-\beta}$ is bounded and one-to-one on $L^p(0,\pi)$.
Boundedness of ultraspherical multipliers has been investigated by several authors (see [7], [15], [6] and [14]). In particular, by using the following lemma one can prove this boundedness on $L^p(0,\pi)$. The lemma is an easy consequence of [14, Corollary 17.11], and it will be useful in the sequel.
Proposition 1 gives sense to the potential spaces $L^p_{\lambda,\beta}$.
Ultraspherical Sobolev spaces
A standard procedure shows that the space $W^p_{\lambda,m}$ is complete under this norm. Next we establish properties of these Sobolev spaces.
Our objective now is to prove Theorem 1. First we need to establish the boundedness of some operators.
Lemma 2. Let $\lambda > 0$, $1 \le p \le \infty$ and $k \in \mathbb{N}$. Then the projection operator $P_k$ is bounded on $L^p(0,\pi)$.
Proof: It is sufficient to note that ..., where $p'$ denotes the conjugate exponent of $p$.
Proof: According to (12) we can write ... We denote by $g$ the function ... Let us choose $J \ge 2(2\lambda + k + 2)$. Applying Lemma 1, we get that the operator $R^k_{\lambda,1,J}$ defined by ... can be extended to $L^p(0,\pi)$ as a bounded operator. Also, by proceeding as in the proof of Lemma 2 we can establish that the operator $R^k_{\lambda,1} - R^k_{\lambda,1,J}$ is a bounded operator from $L^p(0,\pi)$ into itself. Thus, we conclude that $R^k_{\lambda,1}$ can be extended to $L^p(0,\pi)$ as a bounded operator from $L^p(0,\pi)$ into itself.
Proposition 5. Let $\lambda > 0$, $1 < p < \infty$ and $k \in \mathbb{N}$. We define on $S_{\lambda+k}$ the operator $R_{\lambda,2}$ by ... Then $R_{\lambda,2}$ can be extended to $L^p(0,\pi)$ as a bounded operator from $L^p(0,\pi)$ into itself.
Proof: Let $f \in S_{\lambda+k}$. According to (16) for $j = k$, we have that ... By proceeding as in the proof of Proposition 4 we can see that $R_{\lambda,2}$ can be extended to $L^p(0,\pi)$ as a bounded operator from $L^p(0,\pi)$ into itself.
We can write, for every $f \in S_\lambda$, ... In the next proposition we prove that the inverse of this operator can be extended as a bounded operator from $L^p(0,\pi)$ into itself.
Proof: Fix $J \ge 4(\lambda + 1)$. By proceeding as in the proof of Proposition 4 we first take a function $g_J$ such that ... for $n$ large enough. In order to establish the boundedness property for the operator $T^k_\lambda$, it is then sufficient to prove that the operator $T^k_{\lambda,J}$ can be extended boundedly to $L^p(0,\pi)$. Moreover, Lemma 2 allows us to reduce the boundedness of $T^k_{\lambda,J}$ to functions $f \in S_{\lambda,k}$. Indeed, suppose that (25) holds. Since $f_1 \in S_{\lambda,k}$, by (25) and Lemma 2 we get that ... Finally, to prove the boundedness of the operator $T^k_{\lambda,J}$ on $S_{\lambda,k}$ we apply Lemma 1.
Proof of Theorem 1:
Then, according to Propositions 5 and 6, since $R^m_{\lambda,1} g_2 = R^m_{\lambda,1} g$, we get ... On the other hand, Propositions 4 and 1 lead to ... Thus (26) is established.
Alternative definition of Sobolev spaces
As we said in the introduction, the $n$-th order Riesz transform associated with the operator $L_\lambda$ is given by ... We observe that ... By using Propositions 4 and 5 we will prove that $R^k_\lambda$ can be extended to $L^p(0,\pi)$ as a bounded operator, for every $k \in \mathbb{N}$ and $1 < p < \infty$. As mentioned, our proof is different from the one presented in [5]. We can write ..., where ..., for certain constants $c^{\ell,k}_m$ and $d^{\ell,k}_m$. Here, $N_{\ell,k}$ depends on $\ell$ and $k$ as indicated in the following table. Hence, for every $\theta \in (0,\pi)$, ... Let $f \in S_\lambda$. From (27), and taking into account (12) and (18), one deduces that, for every $k \in \mathbb{N}$, ... By using estimates (28) and (29) we then obtain that, for each $\theta \in (0,\pi)$, ... when $k$ is even, and ... in the case that $k$ is odd. Hence, according to Propositions 1 and 4, the boundedness of the operator $R^{(k)}_\lambda$ will be established once we prove that, for every $\lambda > 0$ and $j \in \mathbb{N}$, the operator $C_{\lambda,j}$ is bounded from $L^p(0,\pi)$ into itself. We proceed by induction on $j$. Indeed, let $\lambda > 0$ and $1 < p < \infty$. Note firstly that ... Suppose now that the operator $C_{\lambda,j}$ is bounded on $L^p(0,\pi)$, for every $\lambda > 0$ and $j = 0, \dots, s$, where $s \in \mathbb{N}$. Let us see that the operator $C_{\lambda,s+1}$ is bounded on $L^p(0,\pi)$, with $\lambda > 0$ fixed. By using the Leibniz rule it follows that, for every $f \in S_{\lambda+s+1}$, ..., where the operator $T_{\lambda,s}$ is defined by ... We observe that (12) and (18) lead to ... Also, a straightforward manipulation shows that ... From this estimate, and by using the induction hypothesis and Proposition 4, we deduce that the operator ... is bounded. Hence, from (31) and Proposition 5, it follows that the operator $T_{\lambda,s}$ is bounded from $L^p(0,\pi)$ into itself. We can write $(\sin\theta)$ ..., where $\theta \in (0,\pi)$, $q \in C^\infty(\mathbb{R})$, $q(0) = 0$ and $q(\pi) = 0$. By choosing $\delta > 0$ such that $q(\theta) = 0$ for $\theta \in$ ..., the proof is finished.
Finally, we can give the proof of Theorem 2 (see the Introduction), which establishes the relation between the alternative Sobolev spaces and the spaces $W^p_{\lambda,m}$.
Proof of Theorem 2: Assume that $f \in W^p_{\lambda,m}$. By Theorem 1 there exists $g \in L^p(0,\pi)$ such that $f = L_\lambda^{-m/2} g$ and $\|g\|_p$ is equivalent to $\|f\|_{W^p_{\lambda,m}}$. Then, by Propositions 7 and 1, for every $k = 0, 1, \dots, m$, we have that ...
A maximal operator on the ultraspherical Sobolev spaces
Kinnunen [11] (see also [12] and [13]) proved that the Hardy-Littlewood maximal operator is bounded on the classical Sobolev space $W^p_1(\mathbb{R}^n)$ for every $1 < p < \infty$. In this section we prove that the maximal operator associated with the Poisson integral for the operator $L_\lambda$ is bounded from $W^p_{\lambda,1}$ into itself, for every $1 < p < \infty$. The maximal operator $P^\lambda_*$ associated with $\{P^\lambda_r\}_{0 \le r < 1}$ is bounded from $L^p(0,\pi)$ into itself, for every $1 < p < \infty$, and from $L^1(0,\pi)$ into $L^{1,\infty}(0,\pi)$ [18, Theorem 2.2]. Now we can give the proof of Theorem 3.
Proof: Let $f \in W^p_{\lambda,1}$ and let $\{r_j\}_{j=1}^\infty$ be an enumeration of the rational numbers in $(0,1)$. Then, the sequence $F_k = \max_{1 \le j \le k} P^\lambda_{r_j}(f)$, $k \in \mathbb{N}$, is bounded in $W^p_{\lambda,1}$. After renorming, $W^p_{\lambda,1}$ is isometrically isomorphic to a closed subspace of $L^p(0,\pi) \times L^p(0,\pi)$. Then $W^p_{\lambda,1}$, after renorming, is reflexive and the closed unit ball in $W^p_{\lambda,1}$ is sequentially compact in the weak topology. Hence, there exist a subsequence $\{F_{k_l}\}_{l\in\mathbb{N}}$ of $\{F_k\}_{k\in\mathbb{N}}$ and a function $g \in W^p_{\lambda,1}$ such that $F_{k_l} \to g$, as $l \to \infty$, in the weak topology of $W^p_{\lambda,1}$, and $\|g\|_{W^p_{\lambda,1}} \le C \|f\|_{W^p_{\lambda,1}}$. Since $F_{k_l}(\theta) \to P^\lambda_*(f)(\theta)$, as $l \to \infty$, for $\theta \in (0,\pi)$, and $\{F_{k_l}\}_{l\in\mathbb{N}}$ is increasing, we conclude that $P^\lambda_*(f) = g$, and the proof of Theorem 3 is finished. The argument in the proof of the last theorem can be used in those cases in which formulas (12) and (18) produce identities like (19). This is the case of the maximal operator associated with partial sums for ultraspherical expansions. In fact, the following theorem can be proved with these ideas and [9, Theorem F].
"year": 2010,
"sha1": "919d7b8fdbee2a7ba8935c3a108ba27845fc72a5",
"oa_license": "CC0",
"oa_url": "https://ddd.uab.cat/pub/pubmat/02141493v54n1/02141493v54n1p221.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "2415cfb6c65cfb1183ddb9d038ef94fe001b16a3",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
209516614 | pes2o/s2orc | v3-fos-license | Trimodal Cell Tracking In Vivo: Combining Iron- and Fluorine-Based Magnetic Resonance Imaging with Magnetic Particle Imaging to Monitor the Delivery of Mesenchymal Stem Cells and the Ensuing Inflammation
The therapeutic potential of mesenchymal stem cells (MSCs) is limited, as many cells undergo apoptosis following administration. In addition, the attraction of immune cells (predominately macrophages) to the site of implantation can lead to MSC rejection. We implemented a trimodal imaging technique to monitor the fate of transplanted MSCs and infiltrating macrophages in vivo. MSCs were labeled with an iron oxide nanoparticle (ferumoxytol) and then implanted within the hind limb muscle of 10 C57BL/6 mice. Controls received unlabeled MSCs (n = 5). A perfluorocarbon agent was administered intravenously for uptake by phagocytic macrophages in situ; 1 and 12 days later, the ferumoxytol-labeled MSCs were detected by proton (1H) magnetic resonance imaging (MRI) and magnetic particle imaging (MPI). Perfluorocarbon-labeled macrophages were detected by fluorine-19 (19F) MRI. 1H/19F MRI was acquired on a clinical scanner (3 T) using a dual-tuned surface coil and balanced steady-state free precession (bSSFP) sequence. The measured volume of signal loss and MPI signal declined over 12 days, which is consistent with the death and clearance of iron-labeled MSCs. 19F signal persisted over 12 days, suggesting the continuous infiltration of perfluorocarbon-labeled macrophages. Because MPI and 19F MRI signals are directly quantitative, we calculated estimates of the number of MSCs and macrophages present over time. The presence of MSCs and macrophages was validated with histology following the last imaging session. This is the first study to combine the use of iron- and fluorine-based MRI with MPI cell tracking.
INTRODUCTION
Mesenchymal stem cells (MSCs) have shown promising results as a cellular therapeutic. Many studies involving MSCs aim to restore damaged tissues, including bone, cartilage, tendon, adipose, and muscle tissue, through tissue regeneration (1,2). Moreover, several proposed therapies rely on the pleiotropic effects that MSCs impose on their local microenvironment through the release of extracellular vesicles, cytokines, and trophic factors (3)(4)(5). MSCs have been shown to exert antimicrobial effects, promote local vascularization and cell growth, and modulate inflammation (1,2,6). MSC survival and engraftment in vivo is critical in determining therapeutic outcomes. Unfortunately, many MSCs undergo apoptosis in the days following administration owing to the stresses of administration and subsequent lack of nutrients (7,8). Apoptotic stem cells release cytokines that attract immune cells (predominately macrophages) to the implant site. A high influx of these cells can ultimately trigger stem-cell rejection (8). The potential of MSC therapy is limited by MSC death and immune rejection; therefore, the development of a technique to quantitatively monitor MSC engraftment and ensuing inflammation over time would be invaluable for evaluating the course of therapy.
Many experimental studies of MSC engraftment have been conducted using histology, which provides detailed molecular and morphological information but is limited to the interrogation of a single time point and portion of tissue. Alternatively, cellular magnetic resonance imaging (MRI) has proven to be an effective technique for noninvasive and longitudinal cell tracking (8)(9)(10). To date, most cellular MRI involves tracking cells, which are labeled with superparamagnetic iron oxide (SPIO) nanoparticles. In proton ( 1 H) MRI images, SPIO-labeled cells appear as regions of signal void. In a uniform magnetic field, SPIOs alter the net local magnetization that nearby 1 H atoms experience and this leads to increased R2* relaxation rates of these 1 H atoms. These voids occupy a volume that is greater than the labeled cells, an effect referred to as the blooming artifact. This effect can lead to enhanced sensitivity of cell detection (11), but it poses challenges for accurate quantification of cell number. Measuring the volume of signal voids is one metric to estimate the number of cells present; however, this is not a direct relationship. One other limitation of SPIO-based cell tracking is lack of specificity in some tissues (9,12). There is some ambiguity when identifying these cells in vivo, as other regions in anatomic MRI appear dark (ie, the air-filled lungs).
Ultrasmall superparamagnetic iron oxides (USPIOs) are a subset of iron oxides used for MRI cell tracking. These nanoparticles are ~30 nm in diameter and are coated in dextran, and thus, they are biocompatible and biodegradable. Ferumoxytol is one such USPIO that is FDA-approved for iron replacement therapy for the treatment of anemia in patients and may be used off-label for iron-based MRI cell tracking (13,14). In this study we will use ferumoxytol for iron oxide cell labeling of MSCs, as it is the most clinically applicable agent.
Fluorine-19 ( 19 F) MRI cell tracking is an alternative to iron oxide-based MRI. In this technique, cells are labeled with perfluorocarbon (PFC) nanoemulsions and detected with 19 F MRI as a hotspot image. Since there is little endogenous 19 F within biological tissues, these cells can be visualized with high specificity. The signal intensity of these images is directly proportional to the number of 19 F atoms, which allows for the quantification of 19 F-labeled cells in vivo (8,9,15). One limitation of 19 F-based cell tracking is that the sensitivity of detection is much lower than for iron oxide agents, requiring a minimum of 10^3-10^5 labeled cells per imaging voxel. PFC agents are clinically approved for cell tracking (16).
Magnetic particle imaging (MPI) is an emerging modality that directly detects SPIO nanoparticles. Similar to 19 F MRI, MPI produces positive contrast images of the distribution of labeled cells. MPI signal is linearly related to the quantity of iron oxide tracer that allows for accurate quantification of SPIO-labeled cells (13,17). It is not feasible to achieve this reliable specificity and quantification in SPIO-based MRI cell tracking, although SPIO-based MRI cell tracking may have superior detection sensitivity depending on tissue contrast. MPI has the potential to overcome the limitations of 19 F MRI (sensitivity) and iron-based MRI (specificity and quantification).
In this study, we use ferumoxytol for labeling and detecting MSCs with MPI, which has recently been shown to permit quantification of MSCs transplanted in a mouse model of osteoarthritis (13). This group also showed that MPI of ferumoxytol-labeled cells was sensitive to changes in cell number in vivo over time, whereas the voids detected in 1 H MRI did not detect this change.
One strategy for detecting immune cells in vivo involves the intravenous administration of the labeling agent. This leads to uptake of the agent by phagocytic cells of the reticuloendothelial system (predominately macrophages) in vivo (8,10,18). MRI is typically performed 1 day after the administration of the cell labeling agent to permit for the clearance of intravascular agent and the accumulation of the label into cells.
Our aim is to combine the use of iron-based MRI, 19 F MRI, and MPI cellular imaging technologies to monitor and quantify the persistence of transplanted MSCs and infiltrating macrophages in vivo. These 3 modalities are complementary and provide additional information (specificity, sensitivity, and quantification of cell number) when integrated together. We explored the ability to label, detect, and quantify MSCs with ferumoxytol for detection in 1 H MRI and MPI images, and infiltrating macrophages with PFC for 19 F MRI detection.
Animal Model
15 C57BL/6 mice (Charles River, Canada, Pittsburgh) were obtained and cared for in accordance with the standards of the Canadian Council on Animal Care, under an approved protocol by the Animal Use Subcommittee of Western University's Council on Animal Care. 1 × 10^6 USPIO-labeled MSCs in 25 µL PBS were implanted into the hind limb muscle of 10 immune-competent C57BL/6 mice (day 0). A second cohort of control mice (n = 5) received 1 × 10^6 unlabeled MSCs in 25 µL PBS. Immediately after MSC implantation, mice were administered 200 µL of PFC agent (V-Sense, CelSense Inc.) intravenously via the tail vein to label phagocytic immune cells in situ. Injections were performed under 2% isoflurane in 100% oxygen anesthesia.
1 H/ 19 F MRI Acquisition
One and 12 days following MSC implantation, in vivo MRI images of all mice (n = 15) were acquired on a 3 T clinical scanner (Discovery MR750, General Electric) using a 4.31 × 4.31 cm 1 H/ 19 F dual-tuned RF surface coil (Clinical MR Solutions, Wisconsin), as previously described (21). Both 1 H and 19 F images were acquired with 3-dimensional (3D) balanced steady-state free precession (bSSFP) sequences. Mice were anesthetized with 2% isoflurane in 100% oxygen during these scans.
19 F images were overlaid onto 1 H images (Osirix, Pixemo SARL) for anatomical reference. 19 F signal in the limb ipsilateral to the MSC implant was manually delineated and quantified relative to reference tubes of known 19 F content (3.33 × 10^16 19 F/mL). Owing to the presence of phagocytic immune cells in the bone marrow (BM) and lymph nodes (LN), any 19 F signal in the contralateral limb was subtracted from the ipsilateral limb, to isolate the 19 F signal detected in the region of MSC implantation.
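A minimal sketch of this quantification step is given below. The reference concentration and the contralateral-subtraction logic follow the description above; the assumption that mean ROI signal scales linearly with 19F spin density, and all function and variable names, are illustrative rather than taken from the study's software.

# Sketch of 19F quantification against a reference tube (illustrative names).
# Assumes 19F image intensity is linear in 19F spin density.
REF_F19_PER_ML = 3.33e16          # 19F atoms per mL in the reference tube

def spins_in_roi(mean_roi_signal, roi_volume_ml, mean_ref_signal):
    """Convert an ROI's mean signal and volume to total 19F spins."""
    spins_per_ml = (mean_roi_signal / mean_ref_signal) * REF_F19_PER_ML
    return spins_per_ml * roi_volume_ml

def implant_specific_spins(ipsilateral_spins, contralateral_spins):
    """Subtract background uptake (bone marrow, lymph nodes) seen contralaterally."""
    return ipsilateral_spins - contralateral_spins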
Evaluation of Ferumoxytol as an MPI Tracer
The particle relaxometer module (RELAX™) on the MOMENTUM™ system (Magnetic Insight Inc., Alameda, California) was used to characterize ferumoxytol as an MPI tracer. In this mode, the localizer gradient field is switched off and a negative magnetic field is turned on and then switched to a positive field (and vice versa). As a result, iron nanoparticles are driven from negative magnetic saturation to positive (positive scan) and vice versa (negative scan). This measures the point spread function (PSF) of the nanoparticles and allows for measurement of properties such as the full-width half-maximum (FWHM) (spatial resolution) and the signal per gram of iron (sensitivity) (22,23). We used the FWHM to define tracer resolution, according to the Houston criterion (24). The shift between the positive and negative PSF is a result of magnetic relaxation, as described in some studies (23,25). We acquired PSFs for 30 mg and 5.5 mg (in 1 mL) ferumoxytol.
MPI Acquisitions
For 5 of the 10 mice that had ferumoxytol-labeled MSCs implanted and that were imaged with MRI, full-body in vivo MPI images were acquired. Image acquisition occurred on days 1, 5, and 12 following MSC implantation, on a MOMENTUM™ system (Magnetic Insight Inc.) using the 3D isotropic mode. In this mode, tomographic images were acquired using a 5.7 T/m gradient, 35 projections, 1 average, in an FOV of 12 × 6 × 6 cm, for a total scan time of ~1 h per mouse. We have included day 5 images as an exploratory, intermediate time point, to assess MPI detection of MSCs over time. Mice were anesthetized with 2% isoflurane in 100% oxygen during these scans. 3D isotropic images of MSC pellets were acquired using the same parameters on day 0.
MPI Calibration and Signal Quantification
Calibration lines were produced to determine the relationship between iron content in ferumoxytol (30 mg/mL stock) and MPI signal. To construct this line, 1 mL samples of ferumoxytol were scanned in the same mode as the in vivo images (3D isotropic). Samples of iron content over 2 orders of magnitude were tested: 0.75 µg, 1.125 µg, 1.5 µg, 2.25 µg, 3 µg, 7.5 µg, 11.25 µg, 15 µg, 22.5 µg, and 30 µg iron.
All MPI images were displayed in full dynamic range and assessed for MPI signal corresponding to the samples (calibration samples, MSC pellets, or MSCs in vivo) with spatial reference to fiducial markers (Osirix, Pixemo SARL). To quantify the MPI signal in a specific region of interest (ROI), a 3D semi-automatic segmentation tool was used. Before delineating these ROIs, the window/level (W/L) was adjusted to each specific region, such that the full dynamic range of this region was displayed. Total MPI signal for the delineated volume was calculated by multiplying mean signal by volume. With samples of increasing iron content, both MPI signal and volume of the ROI increase. Total MPI signal was plotted against iron content to derive calibration lines, and this relationship was used to quantify iron content in MSC pellets and MSCs in vivo. All MPI images, including the calibration, MSC pellets, and in vivo images, were delineated and analyzed in the same way to ensure consistency.
Histological Analysis
Following the last imaging sessions on day 12, mice were euthanized by isoflurane overdose and then perfusion-fixed with 4% paraformaldehyde. The right limb muscle of the mice was excised and paraffin-embedded. Embedded tissues were sectioned (5 µm in thickness) every 400 µm to ensure entire sampling of the tissue. These sections were stained with hematoxylin and eosin for general tissue morphology, PPB with nuclear fast red counterstain to identify the presence of ferumoxytol, or F4/80 immunohistochemical staining to identify macrophages. For F4/80 staining, sections underwent antigen retrieval in sodium citrate buffer, were permeabilized using 0.4% Triton X-100 (Sigma-Aldrich, Oakville, Ontario, Canada), and were then incubated overnight in rat antimouse F4/80 primary antibody [1:100 dilution] (ab16911, Abcam). The next day, sections were incubated with biotinylated goat antirat IgG antibody [1:300 dilution] (BA-9401, Vector Laboratories) and then processed with ABC solution (PK4000, Vector Laboratories, Burlington, Ontario, Canada). Lastly, the slides were incubated in 3,3'-diaminobenzidine tetrahydrochloride (DAB) substrate solution (SK-4100, Vector Laboratories) and counterstained with hematoxylin. Histological images were acquired on the EVOS Imaging System (M7000, Thermo Fisher Scientific).
Statistical Analysis
Linear correlations were conducted between total MPI signal and iron content to determine Pearson's correlation coefficient. Student t-tests were used to evaluate temporal changes in signal void volumes and 19 F signal (day 1 vs 12). One-way ANOVA was used to determine statistical changes in MPI signal over time (days 1, 5, and 12). These analyses were conducted using Prism software (8.0.2, GraphPad Inc.), where P < .05 was considered statistically significant. Values are presented as mean 6 standard deviation.
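The analyses were run in GraphPad Prism; an equivalent open-source sketch (SciPy-based, with illustrative variable names) is shown below. The use of a paired t-test is our assumption, since the same animals were imaged at both time points.

# Equivalent analyses in SciPy (the study used GraphPad Prism).
import numpy as np
from scipy import stats

def analyze(iron_ug, mpi_signal, void_day1, void_day12, mpi_day1, mpi_day5, mpi_day12):
    r, p_r = stats.pearsonr(iron_ug, mpi_signal)            # calibration correlation
    t, p_t = stats.ttest_rel(void_day1, void_day12)          # day 1 vs day 12 (paired assumed)
    f, p_f = stats.f_oneway(mpi_day1, mpi_day5, mpi_day12)   # MPI signal over time
    return {"pearson": (r, p_r), "ttest": (t, p_t), "anova": (f, p_f)}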
Evaluation of Ferumoxytol as an MPI Tracer
We used the relaxometer mode on the Momentum™ scanner to measure the FWHM of 30 mg and 5.5 mg (in 1 mL PBS) ferumoxytol. As seen in Figure 1A, we measured a FWHM of 66.335 mT. For a 6.1 T/m gradient, the resolution of this ferumoxytol is 1.088 cm. The amplitude of the 30 mg PSF was ~4.5 times the height of the 5.5 mg PSF.
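The conversion from the measured FWHM to a native spatial resolution is simply a division by the gradient strength; a one-line check of the figure quoted above (illustrative only):

# Native spatial resolution from the measured PSF width (Houston criterion).
fwhm_mT = 66.335          # measured full-width half-maximum
gradient_T_per_m = 6.1    # selection-field gradient used for this estimate
resolution_cm = (fwhm_mT / 1000) / gradient_T_per_m * 100
print(resolution_cm)      # ~1.09 cm, consistent with the value quoted above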
Relationship Between Iron Content and MPI Signal
We acquired images of ferumoxytol samples with known iron content (Figure 1B). These samples were separated by 2 cm on the MPI bed (5 samples/scan). There was a strong linear relationship (R^2 = 0.992, P < .001) between iron content and MPI signal (arbitrary units, A.U.) (Figure 1C). This relationship holds for the small quantities of iron that are relevant for our investigation (Figure 1D). The equation of the line is: MPI Signal = 12.145 × (Iron content) + 2.9034. Using this relationship, iron content may be determined for a given MPI signal. We used these calibration lines to quantify iron content in ferumoxytol-labeled MSC pellets and ferumoxytol-labeled MSCs in vivo.
Assessment of MSC Labeling with Ferumoxytol
Uptake of ferumoxytol by MSCs was assessed using PPB staining (Figure 2, A and B), and iron content in 1 × 10^6, 0.5 × 10^6, and 0.25 × 10^6 cells was quantified using MPI (Figure 2C). These 2 techniques indicated that MSCs were adequately labeled, with 2.430 ± 0.211 pg iron/cell. We also showed that there is a relationship between MPI signal and the number of ferumoxytol-labeled MSCs in the pellet on day 0. For pellets containing 0.25 × 10^6, 0.5 × 10^6, and 1 × 10^6 cells, iron content was determined to be 0.876 µg, 1.227 µg, and 2.208 µg, respectively (Figure 2D). The viability of these cells did not change with MSC labeling (97% viability before and after labeling).
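Putting the calibration line and the per-cell loading together gives a direct route from a measured MPI signal to an estimated cell number. The sketch below uses the fitted slope and intercept and the mean loading of 2.43 pg iron per cell reported above; treating the calibration amounts as micrograms and assuming the per-cell loading is unchanged in vivo are our assumptions, and the function names are illustrative.

# From total MPI signal (mean ROI signal x ROI volume) to iron mass and cell number.
SLOPE, INTERCEPT = 12.145, 2.9034     # fitted calibration: signal = SLOPE*iron + INTERCEPT
PG_IRON_PER_CELL = 2.43               # mean ferumoxytol loading measured in vitro

def iron_ug_from_signal(total_mpi_signal):
    """Invert the calibration line; iron in micrograms (assumed unit)."""
    return (total_mpi_signal - INTERCEPT) / SLOPE

def cells_from_signal(total_mpi_signal):
    """Estimate cell number, assuming the in vitro per-cell loading still applies."""
    iron_pg = iron_ug_from_signal(total_mpi_signal) * 1e6
    return iron_pg / PG_IRON_PER_CELL

# Example: a pellet measured at ~2.2 ug iron corresponds to roughly 9e5 labeled cells.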
Ferumoxytol-labeled MSCs were also detected with MPI 1, 5, and 12 days after MSC implantation ( Figure 3C; and see online Supplementary Figure 1). These MPI images were windowed to display the full dynamic range from a 1.5-mg ferumoxytol reference (not shown). In day 12 images, MPI signal generated from ferumoxytol-positive MSCs was diminished but more clearly visible with windowing. In MPI images, signal was also detected in the gut regions, presumably owing to the presence of iron in mouse feed. Mouse feed was imaged separately by MPI and had substantial iron content (shown in online Supplementary Figure 2).
In control mice that received unlabeled MSCs (n = 5), no regions of signal voids were detected in 1 H images ( Figure 3D). In these same mice, 19 F signal was detected in the LNs, BM, and in the muscle where the MSC implant occurred, similar to the mice which received ferumoxytol-labeled MSCs ( Figure 3E).
Quantification of Temporal Changes in Iron Voids, 19 F Signal, and MPI Signal
Over 12 days, the measured iron void volumes in 1 H images declined in all 10 mice, by 62% on average (P = .0003) (Figure 4A). On day 1, the average void was 9.216 ± 4.136 mm^3 and by day 12, 3.523 ± 2.217 mm^3. 19 F signal, detected from PFC-labeled immune cells, was detected in both limbs in all 15 mice on days 1 and 12 (Figure 4B). On day 1, 19 F signal in the limb ipsilateral to the MSC implant was (1.866 ± 0.5825) × 10^19 spins and (2.522 ± 2.101) × 10^18 spins in the contralateral limb. Signal in the contralateral limb was only present in the bone marrow and LNs. The difference in 19 F signal between these limbs, representing signal solely from immune infiltration as a result of the MSC implantation, was (1.614 ± 0.6604) × 10^19 19 F spins. On day 12, (1.560 ± 0.6535) × 10^19 19 F spins were present in the ipsilateral limb and (8.518 ± 7.227) × 10^18 spins in the contralateral limb, resulting in a difference of (1.470 ± 0.6565) × 10^19 19 F spins. There was no significant difference (P = .148) in the 19 F signal resulting from MSC implantation (ie, the differences between limbs) when comparing signal on day 1 and day 12. This indicates the persistent infiltration of immune cells. There was no significant difference (P = .841) in 19 F signal between mice administered ferumoxytol-labeled MSCs (1.530 ± 0.746 × 10^19, n = 10) and unlabeled MSCs (1.577 ± 0.487 × 10^19, n = 5).
Microscopy and Immunohistochemistry
MSCs were identified in hematoxylin and eosin sections among connective and muscular tissue ( Figure 5A). PPB staining verified the presence of ferumoxytol in these MSCs ( Figure 5B). F4/80 staining with DAB identified macrophages infiltrating the MSCs ( Figure 5C). These are directly adjacent sections.
DISCUSSION
We have detected and quantified the presence of MSCs and infiltrating macrophages over time using a unique combination of cellular imaging technologies. MSCs were labeled with ferumoxytol and detected in 1 H images as regions of negative signal (signal void) and in MPI images as regions of positive signal.
Macrophages that accumulate at the site of MSC implantation were labeled in vivo with a PFC agent, then detected with 19 F MRI. The direct quantification of MPI and 19 F MRI signal was used to estimate the relative number of MSCs and macrophages over time. This multimodality imaging approach allowed for the confirmation of MSC delivery, the measurement of MSC number over time (post implantation), and quantification of inflammation.
Ferumoxytol as a Dual 1 H MRI and MPI Cell Tracking Agent
In this study, we measured a decrease in the volume of signal loss generated by ferumoxytol-labeled MSCs in 1 H images and a decrease in MPI signal detected over 12 days following MSC implantation. This occurred in all mice and is consistent with several previous MRI cell tracking studies from our laboratory (8,15,26). Microscopy obtained on day 12 confirmed that PPB-positive cells were present in muscle tissue at the site of implantation. The decrease in the region of signal loss in MR images and the MPI signal is likely because of MSC apoptosis and clearance of these cells by the immune system. The use of MRI to measure signal void volume gave us an indication that there were fewer cells at day 12 than at day 1. However, this is not a direct measure of the number of MSCs present owing to the blooming artifact and the nonlinear relationship between signal loss and cell number.
Figure 3. MSCs are also detected in MPI images as bright spots (labeled Fe) (C). In these images, iron in the gut of the mice (ie, food) is also detected by MPI. The range of the MPI images is 0-0.14 arbitrary units, which is equivalent to 0-9.8 ng iron/mm^3. Mice that received unlabeled MSCs (controls) had no signal voids present in 1 H MRI (D) and 19 F signal persisting (E), in response to the implant. The CLUT is displayed above each image. All images of the same type are windowed identically for comparison.
With MPI we detected a decrease in positive signal produced by ferumoxytol-labeled MSCs over 12 days. This finding was in agreement with our MRI measurements; however, the MPI signal is directly related to iron concentration, which can be related back to MSC number. We can estimate the number of MSCs in vivo from the MPI data by comparing the measurements of MPI signal with the mean iron uptake per cell for MSCs labeled in vitro. MPI data for cell samples showed 2.4 pg iron per MSC on average. This was used to estimate the number of MSCs in vivo over time (Figure 4, C and D). The use of ferumoxytol as a tracer for both iron-based 1 H MRI and MPI cell tracking is appealing, as it is a clinically translatable iron nanoparticle. However, other iron nanoparticles (ie, ferucarbotran, an SPIO) have superior MPI SNR and spatial resolution compared with ferumoxytol (13). In this study we reported an FWHM of 1.088 cm for 30 mg of ferumoxytol (30 mg iron/mL). We also reported a PSF for 5.5 mg ferumoxytol (in 1 mL PBS) to allow easy comparison of FWHM and SNR with ferucarbotran (Vivotrax, 5.5 mg iron/mL) in the future. The ideal MPI nanoparticle is still under investigation, considering the effects of nanoparticle size and biological properties, that is, surface composition and the cell labeling process. The Langevin model predicts a cubic improvement in spatial resolution with increasing nanoparticle size (23,25). USPIO (ferumoxytol) nanoparticles have a diameter of <50 nm, which is smaller, and contain less iron, than other available nanoparticles such as SPIOs (50-100 nm) and micron-sized superparamagnetic iron oxides (~1 µm) (27). In our study, we used protamine sulfate and heparin to increase the uptake of ferumoxytol by MSCs. There is evidence (28) that the use of transfection agents (protamine) can reduce MPI detection; however, this effect is seen mainly at lower drive frequencies (0.4 kHz). The Momentum™ MPI scanner uses a 45 kHz alternating magnetic field to excite iron nanoparticles.
Figure 4. There was no significant difference in 19 F signal between mice with ferumoxytol-labeled (n = 10 mice) or unlabeled (n = 5 mice) MSCs. MPI signal and iron content (determined by MPI) declined over 12 days (67% reduction between days 1 and 12) in all 5 mice (C). The number of MSCs (estimated using MPI data) also declined over time in these same mice (D).
We showed that the formation of a calibration line was a robust technique to quantify iron content from the measured MPI signal. In this process, the samples of iron (0.75-30 µg) were imaged using the same settings as the other in vivo scans (3D isotropic mode). This linear relationship persists at low iron content (0.75-3 µg), which is useful in the quantification of ferumoxytol-labeled MSCs (which contained ~2.4 µg in 1 × 10^6 cells on day 0).
Imaging Inflammation With 19 F MRI
This is the first study to use PFC to indicate inflammation associated with iron-labeled stem cells and to track this over time. Following the intravenous administration of PFC, we detected prominent regions of 19 F signal in the limb muscle surrounding the MSC implant using 19 F MRI. This in vivo labeling technique is known to label resident phagocytic immune cells of the reticuloendothelial system (8,10). We detected a large number of 19 F atoms (on the order of 10^19) in the ipsilateral limb on both days 1 and 12. Microscopy obtained on day 12 confirmed that F4/80-positive cells were present in muscle tissue at the site of implantation. This suggests that the number of PFC-positive macrophages remained constant over this time. We can get a rough estimate of macrophage cell number by comparing the value for total 19 F atoms with the mean 19 F uptake per cell for macrophages labeled in vitro. Our previous work has measured 2.12 × 10^11 19 F spins per macrophage using NMR (21). Using this value, we would estimate that ~7.44 × 10^7 cells are present at the site of the MSC implantation on day 1 and 6.93 × 10^7 cells on day 12. This is the first study to demonstrate the ability to image macrophage infiltration in vivo using 19 F on a clinical (3 T) MRI system. Compared with iron-based cell tracking, 19 F MRI has lower sensitivity and, consequently, preclinical 19 F cell tracking has only been performed at relatively high magnetic field strengths (>3 T). The bSSFP imaging sequence and surface RF coil play a major role in enhancing sensitivity to enable detection and cell tracking of 19 F-positive cells at 3 T.
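The cell-number estimate quoted above is simply the measured total of 19F spins divided by the mean per-cell loading determined ex vivo. A minimal sketch, assuming the in vitro loading of 2.12 × 10^11 spins per macrophage applies unchanged in vivo:

# Macrophage number from total 19F spins (assumes in vitro loading holds in vivo).
SPINS_PER_MACROPHAGE = 2.12e11   # measured by NMR in reference (21)

def macrophage_count(total_f19_spins):
    return total_f19_spins / SPINS_PER_MACROPHAGE

# Example with the day-12 implant-specific signal reported above:
print(f"{macrophage_count(1.470e19):.2e}")   # ~6.9e7 cells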
Potential Limitations
The MRI and MPI cell labeling agents used in this study (iron and PFC) may be diluted over time as cells proliferate, and thus, there is some ambiguity in the number of cells detected. Since MSCs are implanted in vivo into a suppressive environment that lacks nutrients, we do not expect that MSCs are proliferating substantially. We presume that the reduction of ferumoxytol in MSCs, detected as signal voids on 1H MRI and as decreased MPI signal, predominantly results from MSC death. The presence and detection of ferumoxytol do not reflect cell viability because this agent is retained within apoptotic MSCs. Phagocytic immune cells take up these apoptotic MSCs and the USPIO nanoparticles for clearance in the liver. Thus, immune cells may be labeled with PFC and ferumoxytol, in a process called bystander labeling. Hitchens et al. (29) previously showed that if iron and 19F agents are in the same cell, iron-mediated quenching of 19F signal can occur. This may contribute to ambiguity when detecting and quantifying PFC-labeled macrophages that are involved with clearance of MSCs. However, we did not detect a difference in 19F signal between mice that received ferumoxytol-labeled MSCs and those that received unlabeled MSCs. This indicates that while quenching of 19F signal may be occurring in the presence of ferumoxytol, this effect does not significantly alter the quantification of PFC-labeled macrophages in this application.
We detected unwanted signal in the mouse gut in all 5 mice imaged with MPI (Figure 3) owing to the presence of iron in mouse feed. This gut signal is also present in mice that do not have iron-labeled cells implanted (data not shown). This signal can complicate analysis of MPI using the automatic thresholding tool if the gut signal is much brighter than the region of interest. Because of this, the signal from ferumoxytol-labeled MSCs was manually delineated for 3 of the mice on day 12. This has negligible impact on the signal quantification; rather, it is more time-consuming for the user. This gut signal can also create problems if it is in close proximity to other target sources of iron. This did not impact the quantification of iron in this study, as the gut signal was distant enough from the MSC implant; however, this should be considered when designing future experiments.
The in vivo scan times for MPI (1 h) are considerably longer than those for MRI (9 minutes for 1H scans and 18 minutes for 19F scans). This much time under anesthesia is undesirable for cell tracking when images are collected at multiple time points. Although 2-dimensional MPI scans of mice can be acquired within 3 minutes, these images (which appear as maximum intensity projections) do not provide volumetric data for accurate quantification of iron present within a 3D geometry.
CONCLUSION
In this study, we have shown that iron-based 1 H MRI, 19 F MRI, and MPI can be used together to noninvasively monitor the fate of 2 cell populations in vivo (MSCs and macrophages). This is the first time that these 3 modalities are combined to monitor cell populations in vivo. We propose that these cellular imaging techniques could be used to monitor MSC engraftment over time and detect the infiltration of macrophages at transplant sites. This could enhance therapeutic monitoring to confirm appropriate MSC delivery, measure the number of MSCs present over time, and quantify immune infiltrate to identify MSC rejection. | 2020-01-01T14:48:46.120Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "26cab7dc2bc9f9beb112fe7a2749691f957eb44f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.18383/j.tom.2019.00020",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "26cab7dc2bc9f9beb112fe7a2749691f957eb44f",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
261876874 | pes2o/s2orc | v3-fos-license | Determining the parameters of the Minimal Supergravity Model from 2l + E_T^{miss} + (jets) final states at LHC
We analyse the events with two same-flavour, opposite-sign leptons + $E_T^{miss}$ + (jets) as expected in pp collisions at LHC within the framework of the minimal Supergravity Model. The objective is the determination of the parameters m_0 and m_{1/2} of this model (for a given value of $\tan\beta$). The signature $l^+ l^- + E_T^{miss}$ + (jets) selects the leptonic decays of $\tilde{\chi}^0_2$, $\tilde{\chi}^0_2 \to \tilde{\chi}^0_1 l^+ l^- $, $\tilde{\chi}^0_2 \to \tilde{l}_{L,R}^{\pm} l^{\mp} \to \tilde{\chi}^0_1 l^+ l^-$. We exploit the fact that the invariant dilepton mass distribution has a pronounced structure with a sharp edge at the kinematical endpoint even in such an inclusive final state over a significant part of parameter space. We determine the domain of parameter space where the edge is expected to be visible. We show that a measurement of this edge already constrains the model parameters essentially to three lines in the ($m_0, m_{1/2}$) parameter plane. We work out a strategy to discriminate between the three-body leptonic decays of $\tilde{\chi}^0_2$ and the decays into sleptons $\tilde{l}_{L,R}$. This procedure may make it possible to get information on SUSY particle masses already with low luminosity, L_{int}=10^3 pb^{-1}.
Introduction
If 'low-energy' supersymmetry (SUSY) is realised in Nature it should show up at the Large Hadron Collider (LHC). Strongly interacting particles as gluinos and squarks will be most likely the first SUSY particles to be seen at LHC. Gluinos of mass less than ∼ 2 TeV and squarks of mass less than ∼ 1.5 TeV [1][2][3] can be detected, covering in such a way the whole theoretically motivated parameter space. LHC is also a good laboratory for the search of electro-weakly interacting particles, e.g. sleptons [4,5]. In a recent paper [6] it was shown within the minimal supergravity (mSUGRA) [7] model that sleptons in the mass range of ∼ 100 to 400 GeV can be detected at LHC by investigating the signature two leptons + E miss T + no jets. However, this final state where direct production (Drell-Yan) of sleptons predominates requires high luminosity, L int = 10 5 pb −1 . Sleptons can be also produced indirectly in the decays of charginos and neutralinos, especially inχ 0 2 →l L,R l decays. The charginos and neutralinos, can in turn be produced directly or come from gluinos and/or squarks. This leads to final states with ≥ 2 leptons + E miss T + (jets). Actually, this indirect slepton production throughg,q decays has the largest cross-section in a sizable region of the parameter space accessible at LHC and could allow sleptons to be already revealed at L int = 10 3 pb −1 , i.e. simultaneously with strongly interacting sparticles. Thus, indirect production of sleptons can be more important for observing a slepton signal than direct one [8]. Moreover, in such a way the mass reach for sleptons search can be extended up to ml L ∼ 740 GeV. Having evidence for SUSY at LHC, one of the next tasks will be to find out the underlying model and to determine the model parameters. In this paper we work out a method to determine SUSY parameters and suggest a strategy for getting information on masses of SUSY particles by means of the signature l + l − + E miss T + (jets). Our study is made within the framework of the minimal supergravity model (mSUGRA) [7]. In this model all scalar particles (sfermions and Higgs bosons) have a common mass m 0 at M GUT ≈ 10 16 GeV. The gaugino masses M 1 , M 2 , M 3 (corresponding to U(1), SU (2), and SU (3), respectively) unify to a common gaugino mass m 1/2 , and all trilinear coupling parameters A ijk have the same value A 0 at M GUT . One also has unification of the electroweak and strong coupling parameters α i , i = 1, 2, 3 [9]. A further reduction of the parameters is given by invoking 'radiative symmetry breaking'. As a consequence, one has only the following input parameters: m 0 , m 1/2 , A 0 , tan β, sign(µ). Here tan β = v 2 v 1 , the ratio of the two vacuum expectation values of the two Higgs doublets, and µ is the Higgsino mass parameter. The whole SUSY particle spectrum can then be calculated by making use of renormalization group equations (RGE). This model is also incorporated in the Monte-Carlo generator ISASUSY [10] which is used in our analysis. This paper is aimed at determining the parameters m 0 and m 1/2 with fixed tanβ. Knowing these parameters, we can calculate the masses of the superpartners using RGE. For this purpose, we study in detail the leptonic decays ofχ 0 2 which have some useful features. Within the mSUGRA model,χ 0 2 has two-body decays,χ 0 2 →l ± L,R l ∓ , in the region m 0 < ∼ 0.5 · m 1/2 of the parameter space, whereas in the region m 0 > ∼ 0.5 · m 1/2 , m 1/2 < ∼ 200 GeV theχ 0 2 has three-body decays,χ 0 2 → l + l −χ0 1 . 
In both regions the invariant dilepton mass spectrum (M l + l − ) has a maximum M max l + l − , and therefore a pronounced structure with a sharp edge can be seen at the kinematical endpoint. This property was discussed first in ref. [11] in the case of three-body decays ofχ 0 2 and then in ref. [12] in the case of two-body decays. The generality of this feature, i.e. the observability of an edge in the M l + l − spectrum even in inclusive l ± l ∓ l +− and l + l − + E miss T final states in a large part of the parameter space was shown in [13]. We will show how much the parameters are constrained by a measurement of the M max l + l − value of the dilepton mass spectra. Moreover, we will discuss a method, based on the analysis of the M l + l − spectrum, to find out whether the observed edge is due to the two-body or three-body decays ofχ 0 2 .
Sparticle masses in mSUGRA
Within the Minimal Supersymmetric Standard Model (MSSM) the masses of the neutralinos are determined by the parameters $M = m_{1/2}(M_Z)$, $\mu$, and $\tan\beta$, using $M_1 \simeq \frac{5}{3}\tan^2\theta_W \, M \simeq 0.5\,M$ ($M_1$ being the U(1) gaugino mass). In the following, we fix $\tan\beta = 2$ (we assume that $\tan\beta$ could be known from previous experiments) and take $A_0 = 0$. In mSUGRA $|\mu|$ quite generally turns out to be $|\mu| > M$, so that in this case the two lighter neutralinos are gaugino-like, with $m_{\tilde\chi^0_1} \approx M_1$ and $m_{\tilde\chi^0_2} \approx M \approx 2\, m_{\tilde\chi^0_1}$. In supergravity the slepton masses are given in terms of $m_0$ and $m_{1/2}$ by the renormalization group solutions of ref. [14]; analogous equations exist for squarks [14]. Therefore, when the parameters $m_0$, $m_{1/2}$ and $\tan\beta$ are known we can calculate all sparticle masses. A special case is the third generation of squarks and sleptons, where L−R mixing plays a crucial rôle.
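The paper obtains the spectrum from the full RGE machinery in ISASUSY; purely as an illustration of how the low-scale masses scale with $m_0$ and $m_{1/2}$, the sketch below uses commonly quoted one-loop approximations. The numerical coefficients (0.4 and 0.8 for the gaugino masses, 0.15 and 0.52 in the slepton mass formulas) are assumptions standing in for the actual RGE solutions and should not be read as the values used in the analysis.

```python
import math

def gaugino_masses(m_half):
    """Rough low-scale gaugino masses from the common mass m_1/2 (GeV).
    The factors 0.4 and 0.8 are illustrative one-loop values, not the paper's RGE output;
    in the gaugino-dominated limit (|mu| > M) they also approximate m(chi^0_1) and m(chi^0_2)."""
    return 0.4 * m_half, 0.8 * m_half

def slepton_masses(m0, m_half, tan_beta=2.0):
    """Approximate left/right selectron-smuon masses (GeV); coefficients are assumptions."""
    mz2, sw2 = 91.19**2, 0.231
    cos2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)
    m_lR = math.sqrt(m0**2 + 0.15 * m_half**2 - sw2 * mz2 * cos2b)
    m_lL = math.sqrt(m0**2 + 0.52 * m_half**2 - (0.5 - sw2) * mz2 * cos2b)
    return m_lL, m_lR

m0, m_half = 100.0, 200.0
m_chi1, m_chi2 = gaugino_masses(m_half)
print(f"m(chi^0_1) ~ {m_chi1:.0f} GeV, m(chi^0_2) ~ {m_chi2:.0f} GeV")
print("m(slepton_L), m(slepton_R) ~ %.0f, %.0f GeV" % slepton_masses(m0, m_half))
```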
3 Production and leptonic decay ofχ 0 2 Neutralinosχ 0 2 can be produced at the LHC through a Drell-Yan mechanism (direct production), in association with strongly interacting sparticles, or in the decay chain of gluinos and squarks (indirect production). Gluino and squark pair production processes are the dominant source ofχ 0 2 's because of large strong interaction cross-sections. The branching ratios of gluino and squark decays intoχ 0 2 are also sizable and are shown in fig.1 as a function of the model parameters m 0 and m 1/2 . One can see that the regions in (m 0 ,m 1/2 ) plane where the decaysg →χ 0 2 + X andq →χ 0 2 + X are open, are complementary. In the region m 0 > ∼ 1.47 · m 1/2 gluinos are lighter than squarks and can decay intoχ 0 2 , while squarks prefer to decay into gluinos. In the region m 0 < ∼ 1.47 · m 1/2 squarks are lighter than gluinos and can decay intoχ 0 2 , but gluinos decay into squarks. Hence the decaysg →χ 0 2 + X (q →χ 0 2 + X) andq →g →χ 0 2 + X (g →q →χ 0 2 + X) can coexist (see also fig.1). Fig.2(a,b) shows σ × Br for indirectχ 0 2 production from gluinos and squarks as a function of m 0 and m 1/2 .
2 Branching ratios ofχ 0 2 decays into leptons directly and via sleptons are shown in fig.3 as a function of m 0 and m 1/2 . The regions where these decays are kinematically allowed are complementary in the parameter plane, depending on whetherχ 0 2 is lighter or heavier thanl L,R . Thus, one can distinguish three domains in the (m 0 ,m 1/2 ) plane, which are (also see fig.4): In domain III, the decayχ 0 2 →l R l would also be kinematically allowed, but since the B − ino component ofχ 0 2 is very small, the coupling tol R l is also small. Therefore, the decayχ 0 2 →l R l is very much suppressed in the whole domain. In fig.5 we show the regions for σ × Br(χ 0 .002 pb in the (m 0 , m 1/2 ) plane from indirect and associatedχ 0 2 production followed by decays toχ 0 1 l + l − final states directly or via sleptons. One can see that there are regions in domains II and III where the mentioned decays coexist. Finally, in fig.6 we show σ × Br for indirect and associated production ofχ 0 2 decaying into leptons directly or via sleptons.
Determination of m 0 , m 1/2 and sparticle masses
In order to determine the parameters $m_0$ and $m_{1/2}$ we will exploit in the following the kinematical features of the two- and three-body leptonic decays of $\tilde\chi^0_2$. As pointed out in [11], the decay $\tilde\chi^0_2 \to l^+ l^- \tilde\chi^0_1$ (domain I) has the useful kinematical property that the invariant mass of the two leptons $M_{l^+l^-}$ has a maximum at
$M^{max}_{l^+l^-} = m_{\tilde\chi^0_2} - m_{\tilde\chi^0_1}$,
whereas for the decays $\tilde\chi^0_2 \to \tilde{l}^{\pm}_{L,R}\, l^{\mp} \to l^+ l^- \tilde\chi^0_1$ (domains II and III) the maximum of $M_{l^+l^-}$ is given by [12]:
$M^{max}_{l^+l^-} = m_{\tilde\chi^0_2}\, \sqrt{1 - m^2_{\tilde{l}_{L,R}}/m^2_{\tilde\chi^0_2}}\; \sqrt{1 - m^2_{\tilde\chi^0_1}/m^2_{\tilde{l}_{L,R}}}$.
Thus, the $M_{l^+l^-}$ distribution has a very characteristic shape with a sharp edge at the kinematical endpoint $M^{max}_{l^+l^-}$. As the main source of $\tilde\chi^0_2$'s is their indirect production in gluino and squark decays, the most suitable signature for selecting the $\tilde\chi^0_2$ decays is provided by the topology with two same-flavour, opposite-sign leptons accompanied by large missing transverse energy and usually by a high multiplicity of jets. In this paper we thus concentrate on the two same-flavour, opposite-sign leptons + $E_T^{miss}$ + (jets) channel, where the final-state leptons are electrons and muons.
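For reference, the two kinematical endpoints quoted from refs. [11] and [12] can be evaluated as in the short sketch below; the masses in the example calls are illustrative numbers, not a specific mSUGRA point of the paper.

```python
import math

def edge_three_body(m_chi2, m_chi1):
    """M(l+l-) endpoint for the three-body decay chi^0_2 -> chi^0_1 l+ l-."""
    return m_chi2 - m_chi1

def edge_two_body(m_chi2, m_slepton, m_chi1):
    """M(l+l-) endpoint for the cascade chi^0_2 -> slepton l -> chi^0_1 l+ l-."""
    return (m_chi2
            * math.sqrt(1.0 - (m_slepton / m_chi2) ** 2)
            * math.sqrt(1.0 - (m_chi1 / m_slepton) ** 2))

# Illustrative masses in GeV (assumed values):
print(edge_three_body(160.0, 86.0))        # 74 GeV
print(edge_two_body(160.0, 125.0, 86.0))   # about 72 GeV
```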
CMS detector simulation
The simulations are done at the particle level, with parametrised detector responses based on detailed detector simulations. These parametrisations are adequate for the level of detector properties we want to investigate, and are the only practical ones in view of the multiplicity and complexity of the final state signal and background channels investigated. The essential ingredients for the investigation of SUSY channels are the response to jets, E miss T , the lepton identification and isolation capabilities of the detector, and the capability to tag b-jets.
The CMS detector simulation program CMSJET 3.2 [15] is used. It incorporates the full electro-magnetic (ECAL) and hadronic (HCAL) calorimeter granularity, and includes main calorimeter system cracks in rapidity and azimuth. The energy resolutions for muons, electrons (photons), hadrons and jets are parametrised. Transverse and longitudinal shower profiles are also included through appropriate parametrisations. The main detector features incorporated in the Monte-Carlo description are: • Hadronic tracks, muons and electrons are measured up to | η |=2.4 • Deflection of charged particles due to the 4 T magnetic field is included.
• The resolution for the muon system is parametrised according to [16].
• The calorimetric coverage goes up to |η| = 5 for the HCAL and |η| = 2.6 for the ECAL.
• The ECAL energy resolution is parametrised.
• The HCAL energy resolution is parametrised according to [17] as a function of η.
• The energy resolution of the very forward calorimeter (VFCAL) is parametrised for the parallel-plate chambers option.
• The granularity of the calorimeters and the energy thresholds on cells are included.
• A modified UA1 jet-finding algorithm with a cone size of ∆R = 0.9 (for a description see CMSJET 3.5 [15]) is used for jet reconstruction.
Observability of edges in invariant dilepton mass distributions
In this chapter we determine the regions in the $(m_0, m_{1/2})$ parameter plane where the characteristic edge in the $M_{l^+l^-}$ distribution can be observed in inclusive final states with two same-flavour, opposite-sign leptons + $E_T^{miss}$ + (jets) at different luminosities at the LHC.
Standard Model background processes are generated with PYTHIA 5.7 [18]. We use CTEQ2L structure functions. The largest background is due to $t\bar{t}$ production, with both W's decaying into leptons, or with one of the leptons coming from a W decay and the other from the b-decay of the same t-quark. We also considered other SM backgrounds: W + jets, WW, WZ, $b\bar{b}$ and $\tau\tau$-pair production, with decays into electrons and muons. Chargino pair production $\tilde\chi^{\pm}_1\tilde\chi^{\mp}_1$ is the largest SUSY background but gives a small contribution compared to the signal.
To observe an edge in the $M_{l^+l^-}$ distributions with the statistics provided by an integrated luminosity $L_{int} = 10^3$ pb$^{-1}$ in a significant part of the $(m_0, m_{1/2})$ parameter plane, it is enough to require two hard isolated leptons ($p_T^{l_{1,2}} > 15$ GeV) together with large missing transverse energy ($E_T^{miss} > 100$ GeV). Fig.7 shows the invariant mass spectra of the two leptons at various $(m_0, m_{1/2})$ points from domains I, II and III, respectively. The observability of the "edge" varies from 77σ and a signal-to-background ratio of 31 at point (200,160) to 27σ and a signal-to-background ratio of 2.3 at point (60,230). The appearance of the edges in the distributions is sufficiently pronounced already with $L_{int} = 10^3$ pb$^{-1}$ in a significant part of the $(m_0, m_{1/2})$ parameter plane, see fig.9. The edge position can be measured with a precision of ∼ 0.5 GeV.
With increasing m 0 and m 1/2 cross-sections are decreasing, therefore higher luminosity and harder cuts are needed. To achieve maximal reach in m 1/2 with L int = 10 4 pb −1 for points from domain III, a cut up to E miss T > 300 GeV is necessary to suppress the background sufficiently. For points with large m 0 (domain I) the transverse momentum p T of the leptons and E miss T are not very large, but there are more hard jets due to gluino and squark decays. Thus for these points we keep the same cuts for leptons and missing energy as before (p l 1,2 T > 15 GeV, E miss T > 100 GeV) and require in addition a jet multiplicity N jet ≥ 3, with energy E jet T > 100 GeV, in the rapidity range | η jet |< 3.5. To optimise the edge visibility we also apply an azimuthal angle cut, ∆φ(l + l − ) < 120 0 . For points from domain II, the jet multiplicity requirement is also helpful. Right sleptons are too light to provide large lepton p T and E miss T , and to use cuts on p l T and E miss T alone is not very advantageous. With L int = 10 5 pb −1 , to suppress the background at larger accessible m 0 , m 1/2 values, we have to require at least 2 or 3 jets, depending on the m 0 , m 1/2 region to be explored. Fig.8 shows invariant dilepton mass distributions at some (m 0 , m 1/2 ) points close to maximum reach with L int = 10 4 pb −1 and L int = 10 5 pb −1 respectively.
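The domain-dependent cuts described above can be summarised as a simple selection function. This is only a sketch: the event dictionary fields (lepton_pt, met, jets, dphi_ll_deg) are hypothetical names, the lepton isolation and the same-flavour, opposite-sign requirement are assumed to have been applied upstream, and the thresholds simply restate the numbers given in the text.

```python
def passes_selection(event, region="baseline"):
    """Sketch of the l+l- + ETmiss + (jets) selection described in the text."""
    pt1, pt2 = sorted(event["lepton_pt"], reverse=True)[:2]
    if pt1 < 15.0 or pt2 < 15.0:                 # two hard isolated leptons
        return False
    met_cut = 300.0 if region == "domain_III_high_lumi" else 100.0
    if event["met"] < met_cut:                   # missing transverse energy requirement
        return False
    if region == "domain_I":                     # large-m0 region: demand hard jets
        hard_jets = [j for j in event["jets"]
                     if j["et"] > 100.0 and abs(j["eta"]) < 3.5]
        if len(hard_jets) < 3:
            return False
        if event["dphi_ll_deg"] > 120.0:         # azimuthal opening angle of the lepton pair
            return False
    return True

# Example with assumed event content:
event = {"lepton_pt": [42.0, 23.0], "met": 160.0, "dphi_ll_deg": 95.0,
         "jets": [{"et": 180.0, "eta": 1.2}, {"et": 130.0, "eta": -0.4},
                  {"et": 105.0, "eta": 2.1}]}
print(passes_selection(event, region="domain_I"))
```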
The regions of the (m 0 , m 1/2 ) parameter plane where an edge in the M l + l − spectra can be observed at different luminosities are shown in fig.9. In fig.10 we show separately the three domains where an edge due toχ 0 2 → llχ 0 1 ,l R l andl L l decays can be observed at L int = 10 3 pb −1 . One can notice a small overlapping region, where we expect to observe two edges, due toχ 0 2 → l + l −χ0 1 and toχ 0 2 →l ± R l ∓ → l + l −χ0 1 decays (case 1). With increasing luminosity and correspondingly higher statistics, this overlapping region increases, see figs.11 and 12. These plots show the same as fig.10, but for L int = 10 4 pb −1 and L int = 10 5 pb −1 , respectively. An additional region appears where two edges can be observed simultaneously, due toχ 0 2 →l ± R l ∓ → l + l −χ0 1 andχ 0 2 →l ± L l ∓ → l + l −χ0 1 decays (case 2). These regions (case 1 and 2) are due to the coexistence of differentχ 0 2 decay modes has been seen in fig.5. An example of a M l + l − distribution for case 1 is shown in fig.13. Therefore, to a given integrated luminosity at LHC (L int = 10 3 pb −1 to 10 5 pb −1 ) there corresponds a definite parameter region where the characteristic structure in the M l + l − distribution can be seen. This fact already gives a preliminary information about the parameters m 0 and m 1/2 . The observation of two edges would give even stronger constraints.
$M^{max}_{l^+l^-}$ analysis of the $(m_0, m_{1/2})$ parameter plane
More specifically, at low luminosity, $L_{int} = 10^3$ pb$^{-1}$, at the beginning of LHC operation, the accessible values of $M^{max}_{l^+l^-}$ lie in the ranges shown in figs. 10 and 14. It follows from the discussion above that a measurement of $M^{max}_{l^+l^-}$ in the dilepton mass distribution, with a single edge, constrains the parameters in general to three lines in the $(m_0, m_{1/2})$ parameter plane. In the case of $M^{max}_{l^+l^-} \gtrsim 90$ GeV the constraint is stronger: there are just two possible lines. The most favourable case is when the measured $M^{max}_{l^+l^-}$ value is large, $M^{max}_{l^+l^-} \gtrsim 180$ GeV. Then one is left with a single line in the $(m_0, m_{1/2})$ parameter plane. For the present study we have chosen, as an example of the general situation, the case of $M^{max}_{l^+l^-} = 74 \pm 1$ GeV, with three lines corresponding to domains I, II and III, respectively. The next step is to find out which line in the $(m_0, m_{1/2})$ plane is the right one. To this purpose we have analysed points along these lines, given in tables 1-3.
The study is made for the low luminosity case, L int = 10 3 pb −1 . One should first notice that the observation of two edges at L int = 10 3 pb −1 would determine the (m 0 , m 1/2 ) point uniquely. This is due to the fact that the set of the edge position values in the M l + l − spectrum is different at each point of the parameter region (case 1), where two edges are expected to be observed at 10 3 pb −1 , see figs.10 and 14. With a luminosity L int = 10 4 pb −1 the positions of the two edges will fix two (m 0 , m 1/2 ) points, belonging to domains II and III and corresponding to case 1 and case 2, respectively. At high luminosity, L int = 10 5 pb −1 , the observation of two edges can give up to three possible (m 0 , m 1/2 ) points. One of them is from domain II, corresponding to case 1. The lines M max l + l − = const corresponding toχ 0 2 →l R l decays have the form of an ellipse and can cross the M max l + l − = const lines corresponding toχ 0 2 →l L l decays twice. Hence, two points with the same set of edge positions in the M l + l − spectrum can be found in domain III corresponding to case 2. A discrimination between these points is possible on basis of the event kinematics, and/or by an analysis of the total event rate and the relative number of events corresponding to the two peaks, (see figs.5,10-12).
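The statement that a single measured edge constrains the model to a few lines in the $(m_0, m_{1/2})$ plane can be illustrated with a brute-force grid scan. The sketch below repeats, in self-contained form, the rough mass approximations used earlier (the coefficients remain assumptions and are not the ISASUSY spectrum), computes every kinematically open edge at each grid point, and keeps the points compatible with a measured value of 74 ± 1 GeV; the surviving points trace out the candidate lines.

```python
import math

def approx_spectrum(m0, m_half, tan_beta=2.0):
    """Very rough mass relations (assumed coefficients, for illustration only)."""
    mz2, sw2 = 91.19**2, 0.231
    cos2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)
    m_chi1, m_chi2 = 0.4 * m_half, 0.8 * m_half
    m_lR = math.sqrt(max(m0**2 + 0.15 * m_half**2 - sw2 * mz2 * cos2b, 0.0))
    m_lL = math.sqrt(max(m0**2 + 0.52 * m_half**2 - (0.5 - sw2) * mz2 * cos2b, 0.0))
    return m_chi1, m_chi2, m_lL, m_lR

def open_edges(m0, m_half):
    """All dilepton-edge values allowed by the (approximate) spectrum at this point."""
    m1, m2, m_lL, m_lR = approx_spectrum(m0, m_half)
    edges = []
    if min(m_lL, m_lR) > m2:                 # sleptons too heavy: three-body decay
        edges.append(m2 - m1)
    for msl in (m_lL, m_lR):                 # two-body cascades via lL or lR
        if m1 < msl < m2:
            edges.append(m2 * math.sqrt(1.0 - (msl / m2) ** 2)
                            * math.sqrt(1.0 - (m1 / msl) ** 2))
    return edges

measured, tol = 74.0, 1.0
candidates = [(m0, mh)
              for m0 in range(20, 501, 5) for mh in range(100, 501, 5)
              if any(abs(e - measured) < tol for e in open_edges(m0, mh))]
# `candidates` clusters along the (up to) three lines discussed in the text.
```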
Discrimination between differentχ 0 2 leptonic decays
For a definite value of the edge position M max l + l − one expects a different shape of the M l + l − distributions in two-and three-body decays (see fig.13, where the first peak is due to a two-body decay ofχ 0 2 and the second one due to a three-body decay). As we have seen from figs.7 and 8 the signal events contribute in the interval 0 < ∼ M l + l − < ∼ M max l + l − . In the following we only consider events in this mass region. The average value <M l + l − > of signal and background events of this mass region is shown in fig.15 as a function of m 0 . The errors are calculated by taking into account the statistical error, a systematic error in the measurement of the edge position, and a systematic error of 30 % for background uncertainty (the main background is tt). One clearly sees that <M l + l − > is significantly smaller in the case of a direct three-body decayχ 0 2 → llχ 0 1 . Thus the shape of the dilepton mass spectrum already allows one to decide whetherχ 0 2 decays into a slepton or not. In order to distinguish between domains II and III, we suggest to use the fact that in general the contour lines with the same M max l + l − for right and left sleptons have no overlap in m 1/2 in regions of parameter space which are accessible at a given luminosity, see figs.9 and 14 as an example. It means that the masses ofχ 0 1 's are different for these two lines, and hence E miss T is expected to be different. In fig.16 we show the <E miss T > values for events withχ 0 2 →l ± L,R l ∓ → l + l −χ0 1 decays after the cuts p l 1,2 T > 15 GeV and E miss T > 100 GeV, M l + l − < M max l + l − . The errors are calculated by taking into account the statistical error and a systematic error in the measurement of the edge position. As can be seen from fig.16, <E miss T > is larger in the case ofχ 0 2 →l ± L l ∓ → l + l −χ0 1 than in the case of χ 0 2 →l ± R l ∓ → l + l −χ0 1 as expected.
Event rate analysis
When the correct M max l + l − line is chosen, the last step is to find the point (m 0 , m 1/2 ) on this line. In general the cross section falls with increasing m 1/2 and m 0 . Thus, we study the event rate along the corresponding M max l + l − line. We first discuss the domain III, where the situation is simpler. For the event rate analysis at L int = 10 3 pb −1 , to reduce the uncertainties due to background, we use a harder cut on E miss T , E miss T > 130 GeV. The dependence of the expected event rate on m 0 is shown in fig.17. The errors are calculated by taking into account the statistical error and a systematic error of 30 % for background uncertainty. A systematic error due to the precision of the edge position measurement is also taken into account. From the observed event rate we can then determine m 0 with a good accuracy, δm 0 ≃ 4 GeV. The parameter m 1/2 is then given by the M max l + l − −line in the (m 0 , m 1/2 ) plane. The precision obtained in such a way is δm 1/2 ≃ 4 GeV.
In domain II, the event rate along a line of definite M max l + l − is first increasing and then decreasing with m 0 , see fig.18. This is mainly due to the change in the branching ratios (see fig.4). The dependence of the event rate is, however, such that m 0 cannot be determined uniquely. To a given event rate there correspond in general two m 0 values. The ambiguity can, however, be solved at high luminosity L int = 10 5 pb −1 , when two edges in the M l + l − distribution can be observed.
For domain I, the m 0 dependence of the event rate is shown in fig.19a, again for M max l + l − ≃ 74 ± 1 GeV. Notice the steep increase of the rate at m 0 ≃ 120 − 130 GeV. This is due to the fact that the decay channelχ 0 2 → llχ 0 1 is just opening in this region. As can be seen from the curve in fig.19a, there is an ambiguity in the determination of m 0 if the event rate is in the region 3700 < ∼ N EV < ∼ 5600 or 120 GeV < ∼ m 0 < ∼ 240 GeV. Here it helps if we look at the average number of jets <N jet > in the events under study. Fig.19b shows <N jet > as a function of m 0 . <N jet > is rising with m 0 as more jets are produced as the squarks become heavier. With the measured <N jet > we can resolve the ambiguity in the mentioned region 120 GeV < ∼ m 0 < ∼ 240 GeV and thus determine m 0 with δm 0 ≃ 7 − 3 GeV.
Conclusions
In this paper we have performed a detailed analysis of events with the signature l + l − + E miss T + (jets) to be expected in pp collisions at LHC. Our aim has been to determine the parameters m 0 and m 1/2 of the Minimal Supergravity Model and to get information on the mass spectrum of SUSY particles, assuming knowledge of tanβ from previous experiments. We have exploited the property of theχ 0 2 leptonic decaysχ 0 2 → l + l −χ0 1 , χ 0 2 →l L,R l → l + l −χ0 1 that the invariant mass of the two final leptons has a maximum, M max l + l − , clearly visible even in inclusive production. We have determined for different luminosities the regions in the (m 0 , m 1/2 ) parameter plane where one or two edges can be observed in the invariant dilepton mass distributions. These regions already give preliminary information about the model parameters. The appearance of the edges in the M l + l − distributions can be already seen with a luminosity L int = 10 3 pb −1 . Therefore we have concentrated on a low luminosity study. On the other hand, in case no such observation will be made at this luminosity, the corresponding parameter region can be excluded, and the same analysis can be done at higher luminosity.
We have shown that a measurement of the M max l + l − value constrains the parameters mainly to three lines in the (m 0 , m 1/2 ) parameter plane. The lines correspond to the decay modesχ 0 2 → l + l −χ0 1 ,l L l → l + l −χ0 1 ,l R l → l + l −χ0 1 respectively. We have worked out a method to discriminate the three-body from the two-bodyχ 0 2 decays. In the case of three-bodyχ 0 2 decays the parameter m 1/2 can be determined by the measured value of M max l + l − with a precision of ∼ 0.5 GeV. The parameter m 0 can then be determined from the observed event rate with a precision of 7-3 GeV. In the case of two-bodyχ 0 2 decays, a measurement of the missing transverse energy can allow one to distinguish between the two possible decaysχ 0 2 →l L l andχ 0 2 →l R l, but a more detailed study is needed. By an event rate analysis along the corresponding line in the (m 0 , m 1/2 ) plane we can determine m 0 and m 1/2 , δm 0 ∼ δm 1/2 ∼ 4 GeV.
Knowing m 0 , m 1/2 and tanβ, the masses of all SUSY particles (except for the 3rd generation of squarks and sleptons) are calculated by RGE. The precisions which can be achieved are ∼ 1 − 6 GeV.
In such a way it is possible to obtain information about SUSY particle masses already with low luminosity (L = 10 3 pb −1 ) even without having direct experimental evidence for their existence. This is especially important for sleptons in a parameter region where high luminosity would be necessary to detect them through direct production.
This study has been performed for tanβ = 2, but it is also possible for high values of tanβ. Most likely, for large tanβ > ∼ 30 a higher luminosity will be needed because of smaller branching ratios of theχ 0 2 leptonic decays. Let us mention some further interesting aspects of this work. Selecting the twobodyχ 0 2 leptonic decays by our method represents an indirect evidence for sleptons in the framework of mSUGRA. In this way it is possible to probe slepton masses up to ∼ 740 GeV well beyond what is possible in direct [5,6] searches. As it has been shown in this study, the edge in the invariant dilepton mass distributions is expected to appear at M max l + l − > ∼ 10 GeV, being quite generally a signal for a two-or three-body decay of some abundantly produced heavy object. Hence such an observation may serve as a first evidence for physics Beyond the Standard Model, and if observed with significant E miss T it would be a clear evidence for SUSY, more specifically forχ 0 2 production. Figure 1: Decay branching ratios as a function of m 0 and m 1/2 (in GeV) for: a)g →χ 0 2 +X, b)ũ L →χ 0 2 + X and c)g →ũ L + X, d)ũ L →g + X, for tan β = 2, A 0 = 0, µ < 0. Figure 2: Sigma times branching ratios as a function of m 0 and m 1/2 (in GeV) for indirect χ 0 2 production from gluinos (a) and squarks (b), for tan β = 2, A 0 = 0, µ < 0. 13 Figure 3: Branching ratios ofχ 0 2 decays: a)χ 0 2 →χ 0 1 l + l − , b)χ 0 2 →l ± L l ∓ and c)χ 0 2 →l ± R l ∓ as a function of m 0 and m 1/2 , for tan β = 2, A 0 = 0, µ < 0. Figure 4: Domains of the decaysχ 0 2 →χ 0 1 l + l − (dashed line),χ 0 2 →l ± L l ∓ (solid line) and χ 0 2 →l ± R l ∓ (dashed-dotted line) in the (m 0 , m 1/2 ) plane, corresponding to decay branching ratios in excess of 1% and 10% respectively, tan β = 2, A 0 = 0, µ < 0. . Also shown are the explorable domain in sparticle searches at LEP2 (300 pb −1 ) and the Tevatron (1 fb −1 ), theoretically and experimentally excluded regions [19]. 20 Figure 10: Domains where the observed edge in the M l + l − distribution is due to the decaysχ 0 2 →l ± L l ∓ →χ 0 1 l + l − (solid line),χ 0 2 →l ± R l ∓ →χ 0 1 l + l − (dashed-dotted line), χ 0 2 →χ 0 1 l + l − (dashed line), L int = 10 3 pb −1 . | 2018-05-31T09:28:10.147Z | 1997-11-17T00:00:00.000 | {
"year": 1997,
"sha1": "8e96cc789991728ced5431ce2db61934de22a4e4",
"oa_license": "CCBY",
"oa_url": "http://cds.cern.ch/record/687237/files/arXiv:hep-ph_9711357.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "65b8c634d023408e718f3567ddd538cd8f7393da",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
13189971 | pes2o/s2orc | v3-fos-license | Serum Circulating microRNA Profiling for Identification of Potential Type 2 Diabetes and Obesity Biomarkers
Background and Aim MicroRNAs are small non-coding RNAs that play important regulatory roles in a variety of biological processes, including complex metabolic processes, such as energy and lipid metabolism, which have been studied in the context of diabetes and obesity. Some particular microRNAs have recently been demonstrated to abundantly and stably exist in serum and to be potentially disease-specific. The aim of this profiling study was to characterize the expression of miRNA in serum samples of obese, nonobese diabetic and obese diabetic individuals to determine whether miRNA expression was deregulated in these serum samples and to identify whether any observed deregulation was specific to either obesity or diabetes or obesity with diabetes. Patients and Methods Thirteen patients with type 2 diabetes, 20 obese patients, 16 obese patients with type 2 diabetes and 20 healthy controls were selected for this study. MiRNA PCR panels were employed to screen serum levels of 739 miRNAs in pooled samples from these four groups. We compared the levels of circulating miRNAs between serum pools of each group. Individual validation of the twelve microRNAs selected as promising biomarkers was carried out using RT-qPCR. Results Three serum microRNAs, miR-138, miR-15b and miR-376a, were found to have potential as predictive biomarkers in obesity. Use of miR-138 or miR-376a provides a powerful predictive tool for distinguishing obese patients from normal healthy controls, diabetic patients, and obese diabetic patients. In addition, the combination of miR-503 and miR-138 can distinguish diabetic from obese diabetic patients. Conclusion This study is the first to show a panel of serum miRNAs for obesity, and compare them with miRNAs identified in serum for diabetes and obesity with diabetes. Our results support the use of some miRNAs extracted from serum samples as potential predictive tools for obesity and type 2 diabetes.
Introduction
Over the past decade, the prevalence of obesity in the world has dramatically increased across all age groups, especially in developed countries [1]. Obesity is characterized by abnormal or excessive fat accumulation that is the result of a chronic imbalance between energy intake and energy expenditure [2,3]. It poses a substantial health risk, as obesity is linked to several common diseases, such as type 2 diabetes (DM2), cardiovascular disease, stroke, arthritis, and several types of cancer [4]. Type 2 diabetes is one of the most prevalent metabolic disorders. DM2 is characterized by increased systemic glucose levels and insulin resistance. Many factors are contributing to the growing obesity and DM2 but genetic factors are thought to have great significance in their development. The investigation of gene expression regulatory mechanisms during the evolution of obesity and DM2 will have potential applications in prevention, early diagnosis and treatment. Micro-RNAs (miRNAs) are small, non-coding, 21-23 nucleotide long RNAs that negatively regulate gene expression by pairing with the 3'-untranslated region (UTR) of their target mRNAs [5]. miRNAs are involved in highly regulated processes such as proliferation, differentiation, apoptosis and metabolic processes. Several studies have highlighted the significance of miRNAs in maintaining metabolic homeostasis, and thus regulation of these miRNAs could serve as potential therapeutics in metabolic disorders [6,7]. MicroRNAs have been found in tissues and also in serum and plasma, and other body fluids, in a stable form that is protected from endogenous RNase activity. These unique characteristics of circulating miRNAs may provide a useful biomarker for supplemental diagnosis. Studies by Zampetaki et al [8] showed decreased levels of 10 miRNAs in plasma of diabetic patients (miR-15a, miR-20b, miR-21, miR-24, miR-126, miR-191, miR-197, miR-223, miR-320 and miR-486). The authors suggest that the five most significant regulated miRNA are both necessary and sufficient to distinguish DM2 patients (70%) from control (92%). This study also revealed that a decrease in circulating miR-126 expression is associated with the risk for future development of diabetes. In serum samples of recently diagnosed DM2 patients compared to DM2-susceptible subjects with normal glucose tolerance, Kong L et al. [9] found seven miRNAs (miR-9, miR-29a, miR-30d, miR-34a, miR-124a, miR-146a and miR-375) which were shown to be elevated. All these miRNAs have been previously related to insulin regulation [10]. However few studies have investigated circulating miRNA expression as potential biomarkers for obesity. Recently, Ortega FJ et al. [11] have showed deregulated expression of plasma miRNAs in morbidly obese men. They suggest that five miRNas (miR-142-3p, miR-140-5p, miR-15a, miR-520c-3c and miR-423-5p) may be novel biomarkers for risk estimation and classification of morbidly obese patients. Other papers have studied adipocyte-specific mRNAs and miRNAs that have also been detected in exosomes and microvesicles isolated from rat serum [12][13][14].
The aim of this profiling study was to characterize the expression of miRNA in serum samples of obese, nonobese diabetic and obese diabetic individuals. We wanted to determine whether miRNA expression was deregulated in these serum samples and to identify whether any observed deregulation was specific to either obesity or diabetes or obesity with diabetes. This study is the first to show a panel serum miRNA for obesity, and compare them with miRNAs indentified in serum for diabetes and obesity with diabetes.
Study population, blood collection and preparation of serum for miRNA studies
This project was approved by the Ethics Committee of Clinic San Carlos Hospital and written informed consent was obtained from all volunteers. A total of 69 individuals were categorized accordingly into four groups: 1) healthy controls (CTR) (n=20; 50%Females, 50% Males ), 2) type 2 diabetes (T2D) (n=13; 46%Females, 53% Males), 3) Obese (Ob) (n=20; 85%Females, 15% Males ) and 4) obese with type 2 diabetes (Ob-T2D) (n=16; 40%Females, 60% Males) based on guidelines of the World Health Organization and International Diabetes Federation [1] (Table 1). Diabetes was considered to be present if the individual had a fasting glucose ≥ 126 mg/dl, or a pre-established diagnosis of diabetes. Individuals with BMI ≥ 30 kg/m 2 were classified as obese. Blood was collected in serum Vacutainer tubes with clot activator (Cat. 368968; Becton Dickinson) from volunteer donors at least 12 h following their most recent meal. Serum glucose was obtained by a glucose-oxidase method adapted to autoanalyzer (Hitachi 740, Boehringer Mannheim, Germany). Serum total cholesterol, trygliceride and high-density lipoprotein levels were measured using commercial kits (Boehringer Mannheim, Germany). Lowdensity lipoprotein was calculated by the Friedewald formula. For miRNA studies, one 10-mL serum tube with clot activator was collected. After allowing the content of serum tube to clot at room temperature for 1 h, serum was prepared by centrifugation at 1000 × g (2,300 rpm) for 15 min at 4°C in a KUBOTA 5900 centrifuge. The serum supernatant was slowly removed by using a plastic transfer pipette, leaving 0.5cm remaining to avoid disturbing the serum-clot interface, and stored at -80°C before use and were thawed on ice before use.
miRNA isolation and quality control of RNA
Total RNA was extracted from serum using a commercial column-based system following the manufacturer's instructions with the following modifications (Qiagen miRNeasy Mini Kit). Serum was thawed on ice and centrifuged at 3000 x g for 5 min at 4°C in microcentrifuge. An aliquot of 200 µl of serum per sample was transferred to a new microcentrifuge tube and 750 µl of a Qiazol mixture containing 1.25 µg/mL of MS2 bacteriophage RNA (Roche Applied Science) and spike-ins were added to the sample. A rinse step (500 µL Qiagen RPE buffer) was repeated 2X. Total RNA was eluted by adding 50 µL of DNAsa-RNase-free water to the membrane of the spin column and incubating for 1 min before centrifugation at 15,000xg for 1 min at room temperature. The RNA was stored at -80 °C. To asses the quality of RNA isolated from the cellfree serum, endogenous microRNA assays were carried. For testing the purification yield and absence of PCR inhibitors, miR-191 and miR-423-3p (miRNAs typically detected in serum) were tested on each of RNA from 200 μl of serum from the individual patients/controls (Table S1). The samples must be detected with a Cp < 37 to be included in the analysis. Samples that did not pass this criteria were omitted from any further analysis.To identify hemolyzed samples, the level of miR-451 (microRNA highly abundant in RBCs) was assessed in RNA samples. In the hemolysed samples the concentration of miR-451 was significantly higher (Table S2), as compared with non-hemolysed samples. The hemolysed samples were removed from research.
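The quality-control rules described above can be expressed as a small inclusion filter. In this sketch the Cq < 37 detection criterion comes from the text, whereas the hemolysis rule (flagging samples whose miR-451 level is far above the cohort's typical level) and its threshold are assumptions used only for illustration.

```python
import statistics

def qc_pass(sample, cohort_mir451_cts, detection_cutoff=37.0, hemolysis_delta_ct=5.0):
    """Return True if a serum RNA sample passes the pre-analysis QC.
    `sample` is a dict with Ct values for miR-191, miR-423-3p and miR-451
    (hypothetical field names). The hemolysis threshold is an assumption."""
    # Purification yield / absence of PCR inhibitors: both control miRNAs must amplify.
    if sample["miR-191"] >= detection_cutoff or sample["miR-423-3p"] >= detection_cutoff:
        return False
    # Hemolysis check: miR-451 markedly more abundant (i.e. much lower Ct) than typical.
    typical_451 = statistics.median(cohort_mir451_cts)
    if sample["miR-451"] < typical_451 - hemolysis_delta_ct:
        return False
    return True
```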
Reverse transcription and pre-screening
Two μl of RNA eluate was reverse transcribed in 10 μl reactions using the miRCURY LNA™ Universal RT cDNA synthesis kit (Exiqon). For the initial screening, the Human panel I and panel II containing 742 miRNAs (Exiqon) were applied to 4 pooled samples, one per group. Four ul of cDNA diluted 50 x (equivalent to 0.064 μl original serum sample), was assayed in 10 ul PCR reactions according to the protocol for mirCURY LNA™ Universal RT miRNA PCR. A no-template control (NTC) of water was purified with the samples and profiled like the samples. All amplifications were performed in a 7900HT Fast Real-Time PCR System (Applied Biosystems Inc.) in 384 well plates. The amplification curves were analyzed using the AppliedSDS2.4 software, both for determination of C T (by the second derivate method) and for melting curves analysis.
Pre-screening data analysis
All assays were inspected for distinct melting curves and the Tm was checked to be within known specifications for each particular assay. Furthermore, any sample assay data point had to be detected with a Ct at least 3 cycles lower than the corresponding negative control assay data point, and with a Ct < 37, to be included in the data analysis. Data that did not pass these criteria were omitted from any further analysis. The default PCR procedure was used, and the analysis was performed using RQ Manager software (Applied Biosystems Inc.). ΔCt and ΔΔCt were calculated using the following formulas: ΔCt = Ct(sample) − Ct(endogenous control), ΔΔCt = ΔCt(case) − ΔCt(control). We performed ΔCt normalization using the GeNorm methodology. The GeNorm analysis suggested using the 4 most stable (rank-invariant) miRNAs (miR-30c, miR-103, miR-191 and miR-423-3p). The geometric mean of all the selected internal controls was used as a normalizing factor. Differential expression was calculated using the 2^−ΔΔCt method, and miRNAs were considered differentially expressed beyond a threshold of a 5-fold change. Our aim was to identify potential biomarkers for obesity or type 2 diabetes. Because of this, for validation by RT-qPCR, we selected miRNAs differentially expressed in just one patient group with respect to the other groups.
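As a concrete illustration of the normalization just described, the snippet below computes a 2^−ΔΔCt fold change, normalizing each sample to the mean Ct of the four reference miRNAs (which corresponds to dividing by their geometric mean on the linear scale). All Ct values in the example are invented for illustration; only the formulas and the 5-fold threshold come from the text.

```python
import numpy as np

def fold_change(ct_target_case, ct_refs_case, ct_target_ctrl, ct_refs_ctrl):
    """2^-ddCt fold change with GeNorm-style normalization to reference miRNAs.
    Averaging reference Ct values is equivalent to using the geometric mean of
    the reference quantities, since Ct is a log2-scale measure."""
    dct_case = ct_target_case - np.mean(ct_refs_case)
    dct_ctrl = ct_target_ctrl - np.mean(ct_refs_ctrl)
    ddct = dct_case - dct_ctrl
    return 2.0 ** (-ddct)

# Illustrative Ct values (assumed, not the study's data): one target miRNA and the
# four reference miRNAs (miR-30c, miR-103, miR-191, miR-423-3p), pool vs. controls.
fc = fold_change(29.1, [24.9, 25.3, 25.0, 26.1], 31.6, [25.0, 25.2, 25.1, 26.0])
print(f"Fold change: {fc:.1f}  (a fold change >= 5 was the threshold used here)")
```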
Analysis of individual miRNAs by Real-time quantitative PCR
We used miRCURY LNA™ microRNA PCR System (Exiqon) to assess the presence in serum of individual miRNAs. The cDNA was used as a template for the qPCR reaction using miRNA specific LNA™ PCR primer and Universal PCR primer. Gene expression levels were quantified using the 7900HT Fast Real-Time PCR System (Applied Biosystems Inc.) in 384-well plates. For the analysis by qRT-PCR in each extended sample, we first evaluated a suitable number of reference miRNAs, on the basis of increased expression stability and using the GeNorm methodology. The GeNorm analysis suggested including 4 reference miRNAs (miR-30C, miR-103, miR-191 and miR-423-3p). The selection and addition of various endogenous controls (reference miRNAs) for measures by qRT-PCR and the use of this geometric mean have been identified as among the most accurate and robust factors for normalization [15]. Thus, the geometric mean of all the selected internal controls was used as a normalizing factor. Relative expression was calculated using the comparative Ct method.
Statistical analysis
Statistical analysis was performed with SPSS software version 15.0 (SPSS, Inc., Chicago, USA). A P-value <0.05 was considered statistically significant. For each miRNA, a receiver operating characteristic (ROC) curve was generated. The area under curve (AUC) value and 95% confidence intervals (CI) were calculated to determine the specificity and sensitivity. To increase the diagnostic accuracy of combined changes in serum miRNA levels, multiple logistic regression analysis was carried out according to previously described methods [16].
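A minimal sketch of the evaluation described here, combining two serum miRNA levels in a multiple logistic regression and summarising discrimination with a ROC curve and its AUC. The data are synthetic and the group sizes merely mirror the cohort sizes; this is not a reanalysis of the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# X holds normalized serum levels of two miRNAs (e.g. miR-503 and miR-138) per subject,
# y is the group label (1 = DM2, 0 = OB-DM2). Synthetic data for illustration only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.0, 0.6], 0.3, (13, 2)),    # DM2-like subjects
               rng.normal([0.7, 1.0], 0.3, (16, 2))])   # OB-DM2-like subjects
y = np.array([1] * 13 + [0] * 16)

# Multiple logistic regression combining the two markers, then a ROC curve on the
# fitted probabilities, mirroring the analysis described in the text.
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, scores)
fpr, tpr, _ = roc_curve(y, scores)
print(f"AUC for the combined two-miRNA score: {auc:.3f}")
```

Note that the AUC above is computed in-sample; with cohorts of this size, cross-validation or an independent validation set would be needed to avoid an optimistic estimate.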
Identification of potential miRNAs biomarkers in serum samples
To identify miRNAs in serum from diabetic and obese patients, which could act as a robust and reliable biomarker for these diseases we chose to use Exiqon QPCR panels .To detect the miRNAs most likely to be deregulated, we chose to pool samples within patients and controls. This pooling method had the advantage that we would only detect miRNAs that were expressed in the majority of subjects within the group. Therefore, this procedure reduced variation between individuals and enriched for miRNAs most likely to change between different groups. The four groups of serum samples comprised normal, healthy controls (CTR; n=20), diabetic patients (DM2; n=13), obese patients (OB; n=20) and diabetic with obesity (OB-DM2; n=16). Of the 739 miRNAs profiled by the Human panel I+II v2.M, (Exiqon), 244 miRNAs were detected in all serum pools with Ct values <35 ( Figure S1; Table S3). This is the first report where it is observed a differential miRNA expression in patients with OB, DM2 and OB-DM2 and compared with healthy individuals and among themselves. The more different miRNA expression patterns were in OB compared with CTR (Correlation Pearson´s coefficient 0.392), OB-DM2 compared with CTR (Correlation Pearson´s coefficient 0.5417) and OB compared with DM2 (Correlation Pearson´s coefficient 0.696). Interestingly, the differences in miRNA expression pattern in OB-DM2 compared with DM2 or OB were almost similar (Correlation Pearson´s coefficient 0.822 and 0.8818 respectively). Therefore, OB showed a miRNA expression pattern different than the others groups. After the relevant quality control steps (see Materials and Methods), twelve miRNAs were selected. These selected miRNAs were detected in all patient/control groups with the expression pattern of each miRNA being different for at least one patient group. These twelve potential miRNA biomarkers, were chosen for further investigation; miR-101, miR-138, miR-15b, miR-150, miR-25, miR-205, miR-27b, miR-376a, miR-432-5p, miR-500a, miR-503 and miR-942. These miRNAs were selected because they showed different expression level between at least one patient group and other groups. We considered a miRNA differentially expressed when value of 2 −ΔΔCt was at least 5.
Validation by RT-qPCR in individual serum samples
To validate the potential biomarkers identified from the prescreening results, serum levels of these miRNAs were measured by qRT-PCR assays. As described previously in Materials and Methods, the geometric mean of all the selected internal controls (miR-30C, miR-103, miR-191 and miR-423-3p) was used as a normalization control in serum. Serum miR-30C, miR-103, miR-191 and miR-423-3p levels were evaluated in all subjects (patients and controls). Our data demonstrated that no significant difference was observed in term of Ct values of these miRNAs between control and patient samples (Table S4). Serum levels of the 12 selected miRNAs were validated by qRT-PCR on the 69 serum samples ( Figure S2; Table S5). Our data indicated that all the 12 miRNAs were detected in serum but only miR-138 and miR-376a in obese serum were significantly diminished and serum miR-15b level significantly higher when compared to controls, diabetic and obese diabetic (all p-values<0.01) ( Figure 1A,1B, 1C respectively). Therefore, these three miRNAs could be potential biomarkers of obesity. None of the validated miRNAs were useful as biomarkers of diabetes type 2. Only miR-503 in diabetic serum was significantly lower when compare to controls ( Figure 1D). But in serum samples from obese patients, miR-503 also was significantly lower when compared to controls ( Figure 1D). Therefore miR-503 alone could not be used as a biomarker of diabetes type 2.
Evaluation of the biomarker potential of miR-503 and miR-138 in combination for distinguish DM2 from OB-DM2 patients
Our results show that single miRNA cannot distinguish between DM2 and OB-DM2 (Figure 1). miR-503 was only one that allows differentiation between DM2 or OB-DM2 from healthy controls ( Figure 1D). In multiple logistic regression analysis of miR-503 and miR-138 the resulting ROC curve shows reasonable separation between the two groups (AUC=0.7773; CI 0.5886-0.9661) ( Figure 3A). The multiple logistic regression analysis of miR-503 and miR-376a is also able to make the distinction (AUC=0.7530; CI 0.5596-0.9465) ( Figure 3B). However, this analysis of miR-503 and miR-15b cannot distinguish these conditions ( Figure 3C). It is important that the miRNA biomarkers can distinguish patients with obesity and diabetes from patients with obesity or healthy controls. Such a test would be useful to aid in the prediction of diabetes.
Discussion
Circulating miRNAs have been extensively investigated as novel and non-invasive diagnostic and prognostic markers. Most published studies have been focused on different types of cancer [17][18][19]. More recently, the role of miRNAs in diabetes and obesity has started to be studied. In this study, we performed an initial pre-screening with Exiqon panels followed Serum miRNA Profiling in Diabetes and Obesity PLOS ONE | www.plosone.org by qRT-PCR validation to screen human miRNAs for potential to act as biomarkers in diabetes and obesity. We identified three serum miRNAs, miR-138, miR-376a and miR-15b whose concentrations were significantly deregulated in the serum of OB patients compared with OB-DM2, DM2 and normal controls ( Figure 1A, B, C). ROC curves, revealed that the three miRNA panel has a promising ability to distinguish OB individuals from OB-DM2, DM2 and normal healthy controls (Figure 2). Use of miR-138 and miR-503 in combination showed good ability to efficiently distinguish DM2 from OB-DM2 patients ( Figure 3A). This combination of miRNAs can also be used to distinguish DM2 and OB-DM2 patients from normal controls. Therefore, these miRNAs show great potential as predictive test, which can be used both in the clinic and for screening the general population. We have also shown that miR-376a and miR-503 together can distinguish DM2 from OB-DM2 patients ( Figures 3B). This is the first time that serum miRNAs have been shown to act as useful predictive biomarkers in obesity and obesityrelated diabetes. Moreover, these miRNAs that we have found to be deregulated in obese patients have not yet been found in the circulating blood in association with this metabolic condition.
Deregulation of miRNAs is known to be involved in multiple processes including cell proliferation, apoptosis, cell-cycle regulation inflammation and invasion in various diseases. Among the three serum miRNAs identified in this study, some have already been reported to play important roles in obesity or diabetes. For example, miR-138 is down-regulated during adipogenic differentiation in human multi-potent MSCs [20]. miR-138 has been demonstrated to target the 3'UTR of EID-1, an interacting inhibitor of differentiation that can interact with SHP, an endogenous enhancer of adipogenic PPARγ2 [21].Therefore miR-138 appears to indirectly regulate PPARγ, an established transcription factor driving adipogenic gene expression in human MSCs [22]. The up-regulation of miR-15b is observed in the regenerating mouse pancreas as compared to embryonic day (e) 10.5 or e 16.5 developing mouse pancreas [23], which is associated with regulation of Nerogenin3 (NGN3), a bHLH transcription factor, marks pancreatic endocrine progenitor cells, as confirmed by lineage tracing studies [24] and is essential for expression of insulin in mouse liver cells, acinar cells, gut cells and ES cells [25,26]. ngn3 mutant mice develop diabetes and die at early postnatal stages [27]. Perhaps, deregulating of miR-15b in serum obese patients that we observed in our results could be implicated in a later development of type 2 diabetes. Prospective study would be necessary to validate this hypothesis. Additionally, Zhang Y et al [28], showed that the expression of miR-15b was also significantly elevated in the serum of fatty liver disease patients compared with healthy subjects. Non-alcoholic fatty liver disease (NAFLD) is a type of liver disease induced by long-term excessive energy intake, and it is strongly associated with type 2 diabetes, obesity and hyperlipidemia [29,30]. Upregulation of miR-15b was also observed in the high-fatinduced non-alcoholic fatty liver disease (NAFLD) SD rat model and in the palmitate-induced NAFLD L02 cell model [28]. Increased mir-15b expression in NAFLD models may lead to decreased cell proliferation and glucose consumption while inducing the storage of intracellular triglyceride, which are all hazards of NAFLD and obesity.
However, there are no reports about the role of miR-376a in obesity. In hepatocellular carcinoma cells HCC, miR-376a is significantly down-regulated and the elevated miRNA-376a repressed cell proliferation and induced apoptosis in HCC cells by targeting p85α and reduced PIK3R1 directly [31]. Apoptosis is a fundamental mechanism for maintaining homeostasis by removing dangerous and unnecessary cells. However, adipocytes are resistant to apoptosis because of high levels of Akt/protein kinase B and the anti-apoptotic factor Bcl-2. Adipocytes could be removed through apoptotic mechanisms in some pathological conditions such as obesity. The induction of apoptosis in adipocytes, by regulating miR-376a could be a possible method to reduce the adipocyte number.
Therefore, these three miRNA identified in this study could be implicated in adipogenesis, pancreatic regeneration, proliferation and apoptosis. All these processes are important in an environment of obesity, but our results show that these miRNA present significant differences between OB and healthy control and what is more important, these miRNAs allow differentiating between OB and OB-DM2 patients. Progression to overt diabetes in obese subjects is not always predictable. Thus, while some obese individuals progress to type 2 diabetes, others may only have mild metabolic abnormalities, suggesting that the absolute amount of fat stored may not be the most important factor determining the relationship between obesity and type 2 diabetes [32]. Indeed, other factors such as adipose tissue inflammation are viewed as key promoters of progression to type 2 diabetes [33]. Recently, miR-138 was reported to be down-regulated in esophageal squamous cell carcinoma (ESCC) and the markers of lipid rafts FLOT1, FLOT2 and caveolin-1 were identified as its targets, and NF-kappaB was activated [34]. Future studies will be necessary to explore if the down-regulation of miR-138 in serum samples of obese patients may also play an important role in inflammation process of the obesity.
In this study, a serum 3-miRNA-based expression profile that was able to accurately discern OB individuals from normal controls and OB-DM2 and DM2 patients had been identified. However, some of the highly expressed miRNAs were different from those found in previous studies. This inconsistency may be mainly due to differences in miRNA sources or to the difference between intracellular miRNAs and extracellular miRNAs. Other factors, such as study design, race, sample size or methodology may have also influenced the final results. These findings may have implications in the understanding of OB, establishing management strategies and estimating prognosis.
An important question arises about the potential impact of the pharmacological treatments used in diabetes, obesity and associated conditions on any of the identified miRNAs. Virtually no studies have addressed this issue in clinical setting. However, only Zampetaki et al. [8] showed similar levels of 13 miRNAs in plasma (miR-15a, miR-20b, miR-21, miR-24, miR-126, miR-191, miR-197, miR-223, miR-28-3p, miR-150, miR-29b, miR-320 and miR-486) of diabetic patients with or without drug treatment, mainly sulfonylureas. In fact, studies evaluating the role of drugs on miRNA regulation in metabolic disorders are recommended.
In conclusion, this study is the first to show a panel of serum miRNAs for obesity and to compare them with miRNAs identified in serum for diabetes and for obesity with diabetes. Moreover, our study supports the use of miR-15b, miR-138 and miR-376a extracted from serum samples as potential predictive tools for obesity and type 2 diabetes. | 2016-05-12T22:15:10.714Z | 2013-10-15T00:00:00.000 | {
"year": 2013,
"sha1": "b26c6fe76f1146b10871772214fd326d69adf9c0",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0077251&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b26c6fe76f1146b10871772214fd326d69adf9c0",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
18541138 | pes2o/s2orc | v3-fos-license | Solving the Coincidence Problem: Tracking Oscillating Energy
Recent cosmological observations strongly suggest that the universe is dominated by an unknown form of energy with negative pressure. Why is this dark energy density of order the critical density today? We propose that the dark energy has periodically dominated in the past so that its preponderance today is natural. We illustrate this paradigm with a model potential and show that its predictions are consistent with all observations.
Introduction.
A variety of evidence accumulated over the last several years points to the existence of an unknown, unclumped form of energy in the Universe. First was an apparent concordance [1] of different measurements: the age of the Universe; the Hubble constant; the baryon fraction in clusters; and the shape of the galactic power spectrum. Second came the stunning observations [2] of tens of distant Type Ia Supernovae, which found a distance-redshift relation in accord with a cosmological constant, but in strong disagreement with a matter dominated Universe. Finally, this past year has seen analyses [3] of the experiments measuring anisotropies in the CMB. Taken together, the CMB experiments plot out a rough shape for the power spectrum, one that is in accord with a flat Universe, but in disagreement with an open Universe. If we believe the estimates of matter density coming from observations of clusters [4], the only way to get a flat Universe, and hence account for the CMB measurements, is to have an unclumped form of energy density pervading the Universe.
Perhaps the simplest explanation of these data is that the unclumped form of energy density corresponds to a positive cosmological constant [5]. A non-zero but tiny constant vacuum energy density (cosmological constant) could conceivably be explained by some unknown string theory symmetry (that sets the vacuum energy density to zero) being broken by a small amount. However, to explain in this way a constant vacuum energy density of 2 × 10⁻⁵⁹ TeV⁴, which is not only small but also just the right value such that it is only now beginning to dominate the energy density of the Universe, would require an unbelievable coincidence. A different possibility is to give up the dream of finding a mechanism which would set the vacuum energy density to exactly zero and resort to believing that anthropic considerations select amongst ≳ 10¹⁰⁰ string vacua to find one with a vacuum energy density sufficiently fine-tuned for life. Although this anthropic selection mechanism is logically consistent and even predicts a small but observable cosmological constant, one might think that nature would have found a more efficient mechanism to obtain a sufficiently small cosmological constant than such extreme brute force application of anthropic selection.
An alternative is to assume that the true vacuum energy density is zero, and to work with the idea that the unknown, unclumped energy is due to a scalar field φ which has not yet reached its ground state. This idea, which is called dynamical lambda or quintessence, has received much attention [6] over the last several years. However, two problems still remain. First, the field's mass has to be extremely small, less than or of order the Hubble constant today ∼ 10⁻³³ eV, to ensure that it is still rolling to its vacuum configuration. This is in general difficult because scalar fields tend to acquire masses greater than or of order the scale of supersymmetry breaking suppressed by at most the Planck scale: m ≳ F/m_Pl ≳ TeV²/m_Pl ∼ 10⁻³ eV. Although difficult, this could be achieved using pseudo-Nambu-Goldstone bosons [7]. Another more speculative way to achieve this would be to use the hypothetical symmetry (perhaps some sort of hidden supersymmetry) that ensures that the true vacuum energy density is zero to also protect the flat directions in scalar field space that would correspond to the very light scalar fields necessary for quintessence. The second, and perhaps even more serious problem is that almost all of these models require that we live in a special epoch today, when the quintessence is just starting to dominate the energy density of the Universe, and furthermore this specialness cannot even be justified by use of anthropic arguments.
In recent years a lot of progress has been made in understanding the behavior of quintessence fields. A broad class of solutions, called tracker solutions [10], has been discovered in which the final value of the quintessence energy density is insensitive to the initial conditions. For example, potentials like V = V_0 φ⁻ⁿ or V = V_0 exp(1/φ) can, for suitable choices of V_0, catch up with the critical density late in the evolution of the Universe for a wide range of initial conditions and thus provide a natural setting for explaining the current acceleration of the Universe. However, the suitable choice of V_0 must be of the order of the critical energy density today, i.e., we are back to the problem of living at a special epoch today and not even being able to use anthropic arguments to justify this specialness.
In a subset of these tracking models, which we call the exact tracker solutions [8,9], the scalar field energy density is always related to the ambient energy density in the Universe: if the dominant component in the Universe is radiation, then the tracking field's energy density also falls off as a⁻⁴, where a is the scale factor of the Universe. If the dominant component is matter, then the field's energy density scales as a⁻³. This behavior arises from an exponential potential for φ (regardless of the value of V_0). Since the energy density in this field is always comparable to the background density, we are not living at a special epoch: any observer in the distant past or future would also see the tracking field's energy density. However, these tracking solutions run into two problems. First, if their energy density today truly is dominant, then it should also have been dominant at the time of Big Bang Nucleosynthesis (BBN). Constraints from observations of light element abundances preclude such an additional form of energy density at early times. Second, tracking models have the wrong equation of state at present since the tracking field behaves like matter, with zero pressure, instead of having the necessary negative pressure to accelerate the Universe.
In this letter we ask the question, what if the Universe has been accelerating periodically in the past? Then the fact that the Universe is accelerating today would not be surprising. It would merely reflect that the period is such that the Universe is accelerating today. Of course, if it turned out that to achieve a presently accelerating Universe the period had to be excessively fine-tuned, then this scenario would not be worth considering. However, note that the assumption that there is nothing special about the present time itself argues for the robustness of such a scenario. If the Universe does accelerate periodically, then there is no reason why it should not be accelerating today. If the Universe does accelerate periodically, then it is, in fact, reasonable to expect it to accelerate today.
To judge the merits of this scenario in a concrete manner, we adopt an ad-hoc potential. Though worked out for this specific potential, the predictions outlined here are the generic predictions of a periodically accelerating Universe. The model we adopt for study is a modification of the exponential potential (which leads to the exact tracker solution). The modification to the potential is a sinusoidal modulation, which induces the tracker field to oscillate about the ambient energy density. We show that such a potential can satisfy the BBN constraints, can produce the right equation of state today and leads to testable features in the CMB and matter power spectra. We call this type of energy Tracking, Oscillating Energy, or TOE.
The potential and the field evolution.
Consider a scalar field φ with potential V(φ) = V_0 exp(−λφ√(8πG)). It is well-known [8] that such a potential leads to an attractor solution with Ω_φ ≡ ρ_φ/(ρ_φ + ρ_o) = n/λ², where ρ_o is the energy density in the other component of the Universe, which is assumed to scale as a⁻ⁿ. Thus, no matter what the initial conditions are for φ, it always evolves so that it tracks the rest of the density in the Universe. Now consider the potential V(φ) = V_0 [1 + A sin(νφ√(8πG))] exp(−λφ√(8πG)). (1) This sinusoidal term serves to modulate the tracking behavior. Figure 1 shows the resultant evolution of φ and its energy density for a particular set of the parameters A, ν.
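A minimal numerical sketch of this evolution is given below. It is ours, not the authors' code: it integrates the field in an a⁻ⁿ background in units 8πG = 1, assuming the modulated exponential form of Eq. (1); the parameter values A, ν, λ and the initial conditions are illustrative choices only.

```python
# Illustrative sketch only (not the authors' code): evolution of the TOE field in an
# a^(-n) background, units 8*pi*G = 1, assuming the modulated exponential potential of
# Eq. (1). The values of A, nu, lam and the initial conditions are arbitrary choices.
import numpy as np
from scipy.integrate import solve_ivp

A, nu, lam, n = 0.1, 2.0, 3.0, 4          # modulation amplitude/frequency, slope, rho_o ~ a^-n
V0 = 1.0

def V(phi):
    return V0 * (1.0 + A * np.sin(nu * phi)) * np.exp(-lam * phi)

def dV(phi):
    return V0 * np.exp(-lam * phi) * (A * nu * np.cos(nu * phi)
                                      - lam * (1.0 + A * np.sin(nu * phi)))

def rhs(N, y):
    """ODE in N = ln a: phi'' + (3 + H'/H) phi' + V_phi/H^2 = 0, with rho_o = exp(-n*N)."""
    phi, x = y                             # x = dphi/dN
    rho_o = np.exp(-n * N)                 # background density, normalized to 1 at N = 0
    H2 = (rho_o + V(phi)) / (3.0 - 0.5 * x**2)
    dlnH = -(n * rho_o + 3.0 * H2 * x**2) / (6.0 * H2)
    return [x, -(3.0 + dlnH) * x - dV(phi) / H2]

N = np.linspace(0.0, 30.0, 3001)
sol = solve_ivp(rhs, (N[0], N[-1]), [0.0, 0.0], t_eval=N, rtol=1e-8)
phi, x = sol.y
rho_o = np.exp(-n * N)
H2 = (rho_o + V(phi)) / (3.0 - 0.5 * x**2)
Omega_phi = (0.5 * H2 * x**2 + V(phi)) / (rho_o + 0.5 * H2 * x**2 + V(phi))
# The field relaxes to the tracker Omega_phi = n/lam^2 and oscillates about it with
# the forcing period ln a = 2*pi*lam/(n*nu) quoted in the text (about 2.4 here).
print("tracker value n/lam^2 =", n / lam**2)
print(np.round(Omega_phi[::300], 3))
```

With these choices Ω_φ relaxes to, and then oscillates about, the tracker value n/λ², which is the qualitative behavior shown in Figure 1.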
(The normalization V_0 can be set to G⁻² by shifting the initial value of φ.) Also shown is the tracking solution for this particular value of λ without the modulation. As expected, the sinusoidal term in the potential leads to oscillations about this tracking behavior. One can obtain analytic solutions for the dynamics of the potential in Eq. (1) during radiation (n = 4) or matter (n = 3) domination in the limit that A is small by perturbing about the corresponding exact tracker model, which has φ√(8πG) = (n/λ) ln a. The sine in Eq.
(1) provides a periodic forcing term with period ln a = 2πλ/(nν), while the natural period [8] of the damped oscillations about the exact tracker solution is ln a = 8πλ/{(6 − n)[3(3n − 2)λ² − 8n²]}, with decay e-life ln a = 4/(6 − n). Although the above results are strictly valid only for small A, they account remarkably well for the behaviour shown in Figure 1. The forced period corresponds to the longer period of 5.4 units (n = 4) and somewhere between 5.4 units and 7.1 units (n = 3), while the natural period corresponds to the shorter period of 1.6 units (n = 4, 3) of the damped oscillations, which are presumably excited by the non-linear effects that appear when A is not small. The energy density due to φ is relatively small at the time of BBN and relatively large today for the parameter set in Figure 1. It is, of course, clear that in order to get the right behavior at BBN and today, one has to pick the "correct" parameter sets. This involves a bit of fine-tuning which, as we argue below, is quite reasonable and natural. If one thinks of the parameter set as being randomly selected, then there is a finite probability that the Universe will be accelerating today and that the energy density of φ will be sub-dominant at BBN. What is this probability? If one selects A, ν and λ randomly, the chance of getting a Universe like ours is of the order of 1 in 100. The exact number (for this potential) depends on how stringently we define "a Universe like ours". For example the tight constraints 0.4 < Ω_φ < 0.8, w_φ < −0.5, and (ρ_φ/ρ_0)_BBN < 0.1 give a probability of 1 in 450, while the relaxed constraints 0.1 < Ω_φ < 0.9, w_φ < −0.25 and (ρ_φ/ρ_0)_BBN < 0.2 give a probability of 1 in 26. It is also very important to note that whatever the extent of fine-tuning, all of it is in dimensionless numbers. There are no energy scales in this scenario which are to be set by the present expansion rate of the Universe.
Power Spectra.
To compare with CMB and large scale structure observations, we compute the power spectra of the perturbations in a TOE model. Perturbations evolve differently in the presence of the scalar field energy density. For example, perturbations typically grow only when the Universe is matter dominated. Therefore, we expect a non-zero Ω_φ to lead directly to power suppression on the scales inside the horizon, with increased suppression for larger Ω_φ. The prediction for the CMB angular power spectrum is plotted in Figure 2. The primeval power spectrum is scale-invariant with adiabatic initial conditions. Also plotted for comparison is a model (ΛCDM) with cosmological constant Ω_Λ = Ω_φ today and the rest of the cosmological parameters also being the same. In further discussions we will contrast the results from the TOE model against this ΛCDM model. A noteworthy feature in Figure 2 is the increase in the heights of the first two peaks compared to that of the ΛCDM model. This stems from the fact that the gravitational potential decays more in the presence of the additional quintessence energy density. The decay of the potential at and after recombination (the so-called Integrated Sachs-Wolfe, or ISW, effect) leads [12] to enhanced power on scales ℓ ≲ 600, after which the potential becomes irrelevant. Note that the increase in the amplitude of both the first and second peak cannot be mimicked by adding more baryons, which raise the odd peaks but lower the even ones.
On smaller scales (ℓ ≳ 600), the TOE model has smaller anisotropies. Here there are two competing effects. First, the difference between the TOE and the ΛCDM models (around recombination, when Λ is insignificant) is the presence of the extra quintessence energy density, which leads to the expansion rate in the two models being related as in Eq. (2). Eq. (2) implies that all the relevant scales at recombination (which occurs at a_r ≃ 10⁻³) are smaller in the TOE model by a factor of about 1 − Ω_φ(a_r). In particular, the damping scale is smaller, which increases the power on small scales for the TOE model relative to the ΛCDM model. The second effect is the large scale normalization of the two models [13], and this second effect more than compensates for the first. COBE normalization is sensitive to scales around ℓ = 10, for which the differences between the two models with regard to the late-ISW effect are important. In particular, since Λ domination occurs very late, the ISW contribution around ℓ = 10 is much larger in the TOE model. This in turn implies that the normalization of the primeval power spectrum is smaller, a fact noticeable in the smaller amplitude of the photon power spectrum for the TOE model at small scales (and also the matter power spectrum, as we will soon see). One last effect that is worth pointing out concerns the difference in the peak positions in the two models (though unlike the peak amplitudes, it is probably not easily discerned). In particular, the TOE model has the acoustic features in its angular power spectrum shifted to smaller scales. This directly traces to the decrease in the angular diameter distance to the last scattering surface for the TOE model. Of course, there is also the competing effect of the decrease in the size of the sound horizon at last scattering for the TOE model, which minimizes the effect. The prediction for the matter power spectrum is plotted in Figure 3. The difference in power at the largest scales is due to COBE normalization and the difference in the super-horizon growth factor (which is sensitive to the equation of state of the cosmic fluid) for the perturbation. As one moves to smaller scales, which entered the horizon well before the present, the differences in the evolution of the matter perturbation become more pronounced. The presence of the extra quintessence energy stunts the growth of a perturbation once a mode enters the horizon. So, the earlier the mode enters the horizon, the larger the growth suppression relative to the ΛCDM model. In other words, smaller modes are monotonically more suppressed (something that may not be noticeable in the log plot) compared to the same modes in the ΛCDM model. It might also be surprising that the φ domination around a = 10⁻⁶ does not cause a more appreciable feature (i.e., suppression) in the power spectrum. The reason is that the smallest scales in Figure 3 have just entered the horizon at the time of φ domination (a ∼ 10⁻⁶).
The normalization on small scales is generally quoted in terms of σ_8, the rms mass fluctuation within an 8 h⁻¹ Mpc sphere. For the parameters in Figure 1, the TOE model has σ_8 = 0.4. This is several sigma smaller than the preferred value (see e.g. [11]) of ∼ 0.8, but could be rectified by a small blue-shift in the primordial spectrum [14].
Conclusions.
We have constructed a model wherein the energy density tracks the dominant component in the Universe; satisfies the BBN constraints; and has the proper equation of state today. Further, this model makes definite predictions for large scale structure and for the CMB.
Perhaps the greatest drawback of this class of models is the arbitrariness of the potential. In particular we know of no theory which predicts a potential of the form given in Eq. (1). Nonetheless, we feel that the testable predictions of the model, and the aesthetic feature it preserves, namely that we do not live in a special epoch, are of sufficient interest to warrant further study.
We thank Limin Wang for helpful discussions. The CMB spectra used in this work were generated by an amended version of CMBFAST [15]. This work was supported by the DOE and the NASA grant NAG 5-7092 at Fermilab. EDS acknowledges support by the KOSEF Interdisciplinary Research Program grant 1999-2-111-002-5 and the Brain Korea 21 Project. | 2017-04-13T12:34:03.626Z | 2000-02-17T00:00:00.000 | {
"year": 2000,
"sha1": "11f6f78c3d0fc8a3503e3aba34c1f51cd4c2e05a",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "d4e938a13eaee76cdadeed13845bc385a2486464",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
232323484 | pes2o/s2orc | v3-fos-license | Are the criteria always right? Assessment of hepatocellular carcinoma cases in living donor liver transplantation at a high-volume center
Background/aim: With the increased experience in living donor liver transplantation (LDLT), it has been adopted for the treatment of hepatocellular carcinoma (HCC), with emerging discussions of criteria beyond tumor size and number. In contrast to deceased donor liver transplantation (DDLT), recipient selection for LDLT is not limited by organ allocation systems. We discuss herein the assessment, criteria, and experience with liver transplantation (LT) in HCC cases at a high-volume LDLT center. Material and methods: Between August 2006 and December 2017, 191 adult LT HCC recipients with at least one-year follow-up were retrospectively analyzed. Results: In 191 patients, one-, three- and five-year survival rates were 87.2%, 81.6%, and 76.2%, respectively, including early postoperative mortality. In 174 patients with long-term follow-up, one-, three- and five-year disease-free survival rates were 91.6%, 87.7%, and 84.4%, respectively. When multivariate analysis was utilized, tumor differentiation was the only factor which statistically affected survival (p = 0.025). Conclusion: LDLT allows us to push the limits forward, and the question "Are the criteria always right?" is always on the table. We can conclude that, with the advantage of LDLT, every HCC patient deserves case-by-case discussion for LT with the support of the scientific literature. In borderline cases, tumor biopsy might help determine the decision for LT.
1. Introduction
Hepatocellular carcinoma (HCC) is the most common primary liver cancer and remains an ongoing problem, with incidence increasing worldwide. It is also well known that HCC develops mainly in chronically diseased livers, with low median survival rates if no treatment is received [1][2][3]. There are various modalities for curative and palliative treatment. Surgical resection and interventional radiological treatment are options with successful outcomes in only limited cases due to the underlying chronic liver disease. During the last decades, liver transplantation (LT) has become a radical treatment for HCC in that it can simultaneously treat intrahepatic metastasis as well as multicentric carcinogenesis and the diseased liver [4][5][6].
During the last two decades, the Milan Criteria (MC) have been implemented worldwide for LT in cases of HCC, and many organ sharing programs now use MC for organ allocation. Starting with the University of California, San Francisco (UCSF) criteria [7], over the past decade the search for new criteria and discussions of LT algorithms for HCC became a hot topic in the field. With increased experience in living donor transplantation (LDLT), LDLT was adopted in the setting of HCC treatment with new discussions about criteria beyond the size and number of tumors. In contrast to deceased donor liver transplantation (DDLT), recipient selection for LDLT is not limited by organ allocation systems.
In this study, we discuss the criteria for LT in HCC cases, sharing our experience and assessment of our HCC cases as a high-volume LDLT center.
2.2. Immunosuppression
The initial protocol for immunosuppressive therapy was triple maintenance immunosuppression at lower doses, consisting of prednisone, tacrolimus (Prograf, Astellas USA, Deerfield, IL), and mycophenolate mofetil (MMF; CellCept, Roche Laboratories, Nutley, NJ). Prednisolone was stopped in all cases with a taper at one month after transplant, and MMF was stopped in most cases at three months after transplant. Most patients were followed postoperatively with low tacrolimus trough levels (4-6 ng/mL), adjusted according to their clinical findings. In some HCC recurrence cases, an mTOR inhibitor was started according to the decision made together with the oncologist and hepatologist.
2.3. Follow-up after LT
A thoraco-abdominal CT and/or MRI was performed every 3 months for the first year of follow-up, every 6 months between 1 and 3 years, and annually after 3 years. Alpha-fetoprotein (AFP) testing and clinical examination were performed every month during the first 6 months, every 2 months between 6 months and 1 year of follow-up, every 3 months between 1 and 3 years, and every 6 months between 3 and 5 years. After 5 years, CT or MRI with an AFP test was performed annually or if clinically indicated. A biopsy of all suspicious lesions was performed for recurrence, and we attempted to treat all recurrent lesions with surgical resection or interventional radiological treatment after the determination of recurrence.
Early postoperative mortality (first 6 months) occurred due to sepsis, primary nonfunction (PNF), multiorgan failure (MOF), cardiac arrest, and neurological complications in 17 (8.9%) cases. These cases were included in the analysis. Of the 17 cases, 9 were within the pMC and 8 were beyond the pMC (4 of them were beyond the UCSF criteria). In addition, 8 were transplanted from a deceased donor, and 9 were transplanted from a living donor.
When the data were analyzed according to total tumor numbers (1, 2, 3, 4-9 and more than 10 tumors), there was not a significant difference between the five groups (p = 0.54) (Figure 1A). There were 13 cases with long-term follow-up with more than 10 tumors; 3 deaths occurred due to HCC recurrence in a total of 6 cases with recurrence (Table 3). We also instituted a cut-off for total tumor size of 8 cm, as this was the most supported limit [9] in the literature, and there were no significant differences between total tumor sizes above and below 8 cm (p = 0.19) (Table 2). With our evaluation system, we had the chance to transplant only seven patients with a largest tumor size of more than 8 cm. Statistically, our case number was not large enough to draw a conclusion, but 5 lived for more than 5 years and 3 are still living without HCC recurrence more than 5 years posttransplant (Table 4). We did not find any significant differences in our patient population between AFP levels higher and lower than 200 ng/mL (p = 0.89) (Table 2). There were only 16 cases followed long-term with AFP ≥400 ng/mL, and two deaths occurred due to HCC recurrence in a total of 6 cases with recurrence (Table 5). In our HCC patients, the MELD scores of the recipients did not affect survival rates by subgroup (p = 0.72). According to our univariate analysis, poor tumor differentiation (p = 0.0001) (Figure 1B), microvascular invasion (p = 0.004) (Table 2) and recipient age ≥65 (p = 0.016) (Table 2) affected patient survival. Consistent with our overall LT population, survival rates for older HCC recipients (age ≥65) at 1, 3 and 5 years (72.0%, 64.7% and 58.8%, respectively) were significantly lower than those for younger recipients (age <65) (89.5%, 84.4% and 79.1%, respectively) (Table 2). When Cox regression multivariate analysis was performed, including all the factors, tumor differentiation was the only factor which statistically affected survival in our patients (p = 0.025) (Table 6). Although our case number was not large enough to reach statistical significance, a largest tumor size greater than 8 cm increased the overall HCC recurrence rate (57.1%, n = 4/7) and decreased the long-term overall patient survival rate (71.4%, n = 5/7) (Table 4). In our HCC patients with recurrence, 1-, 3- and 5-year survival rates were 81.3%, 54.7%, and 25.0%, respectively (Figure 2A). Of the 50 beyond-UCSF patients with long-term follow-up, for well-differentiated tumors (n = 10) the 1-, 3- and 5-year survival rates were all 90%, and for moderately differentiated tumors (n = 32) the 1-, 3- and 5-year survival rates were 84.1%, 76.7%, and 67.3%, respectively. In this group, for poorly differentiated tumors (n = 8), survival rates dropped to 46.0% at 1 year and 31.3% at 2 years (Figure 2B).
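For readers unfamiliar with the analysis just described, the sketch below shows how a multivariate Cox proportional-hazards model of this kind is typically set up with the lifelines package. The data frame is synthetic; only the covariate names mirror those discussed above, and the follow-up times and events are random stand-ins, not the study's data.

```python
# Hypothetical illustration only: a multivariate Cox proportional-hazards setup.
# The data frame is synthetic; column names mirror the covariates discussed in the text.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 174                                        # number of patients with long-term follow-up
df = pd.DataFrame({
    'poor_differentiation': rng.integers(0, 2, n),
    'microvascular_invasion': rng.integers(0, 2, n),
    'age_ge_65': rng.integers(0, 2, n),
    'afp_ge_400': rng.integers(0, 2, n),
    'largest_tumor_gt_8cm': rng.integers(0, 2, n),
})
# Synthetic follow-up times (months) and death indicator, just to make the example run.
df['months'] = rng.exponential(60, n).round(1)
df['death'] = rng.integers(0, 2, n)

cph = CoxPHFitter()
cph.fit(df, duration_col='months', event_col='death')
cph.print_summary()                            # hazard ratios, 95% CIs and p-values per covariate
```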
Discussion
It is agreed in the literature that one of the most important steps for successful outcomes after LT in HCC is patient selection, as is true in many other areas of medicine [10]. With the improvements in LT, Mazzaferro et al. reported MC for LT in HCC cases in 1996. In this report, survival rates after LT for HCC cases were similar to the survival rates after LT for other diseases [11]. Improved survival rates in patients beyond MC on explant histopathology started the discussion of extending patient selection criteria for LT, as the aforementioned criteria were considered too restrictive. Starting with UCSF [7], many centers began reporting excellent survival rates with their own new criteria [12][13][14][15][16][17][18][19][20][21][22][23][24][25]. LDLT allows many centers to develop center-specific expanded criteria with acceptable results without consideration of allocation system limitations, and LDLT in the setting of HCC has been adopted worldwide over the past decade [9,10,26]. Sugawara et al. utilized a 5-5 rule (up to five nodules with a maximum diameter of 5 cm) [29]. With the advantage of LDLT, at many centers, especially in Asia, patients with advanced HCC are considered on a case-by-case basis, and risk factors for recurrence, chance of survival, and the strong wishes of the patient, donor, and his/her family are considered [30]. However, the selection criteria are still a matter of debate. Under the influence of ongoing discussions in the literature, starting with the first case we evaluated, all chronic liver disease patients with HCC were considered case-by-case in our multidisciplinary selection meeting. With the advantage of LDLT, we did not limit our discussions around any criteria. Beyond tumor size and number, if patients did not have findings of macrovascular invasion, tumor thrombosis, lymph node involvement or extrahepatic metastasis, they were evaluated as LT candidates. In contrast to DDLT, the indications for LDLT for HCC were decided based on the balance between risks to the living donor and benefits to the recipient [4]. We considered all findings which provide hints about the biological behavior of the tumor. Tumor growth rate over time, AFP level, tumor margin findings on computed tomography (CT) or magnetic resonance imaging (MRI), 18F-labeled fluoro-2-deoxyglucose positron emission tomography (18F-FDG PET) findings, response to other previous treatments, histopathological differentiation (if there was a biopsy) and age of the patients were the parameters we interpreted before making the decision. Only one or two parameters supporting poor biological behavior were not enough to make the decision against LT. The more the morphological limits of selection criteria expand, the more the recurrence rates after LT increase [4]. If most of the findings supported poor biological behavior, alternative and bridge treatment options were suggested instead of LT. In addition, all the possibilities and risks were discussed at length with the recipient, donor candidate and family members. With this evaluation, our survival rates are comparable with the literature and are acceptable.
According to our analysis, which is also supported widely by the literature, tumor differentiation is the most important factor affecting survival rates. However, biopsy for patients with a decompensated cirrhotic liver is not always possible due to retention of ascites and risk of bleeding as well as tumor dissemination. Knowing the tumor differentiation before the decision would be helpful, but a biopsy cannot accurately diagnose the most advanced degree of differentiation due to the heterogeneity of HCC tumors [4]. Preoperative tumor biopsy and grading analysis have huge variability in specificity and sensitivity, which poses limitations for the prognostic value of biopsy [31]. There is a seeding risk of 3%, a false-negative rate of 30%, and only 12.5% sensitivity for the identification of microvascular invasion [32,33]. In contrast, the Toronto group reported that preoperative biopsy is 90% effective in excluding patients with a poorly differentiated lesion. Their recurrence rate related to the preoperative biopsy was 1.9%, which was consistent with previous studies. The Toronto group also reported the biopsy results as one of their main criteria [20]. Dubay et al. reported the usefulness of pretransplant liver biopsy and proposed that LT for advanced moderate to well-differentiated HCC can be performed safely with excellent 5-year overall and disease-free survival in the absence of size and tumor number restrictions [34]. In a previous short review of our experience, presented at a meeting, we concluded that, considering tumor differentiation, a preoperative biopsy can help select the best HCC patients for transplant, even beyond the UCSF criteria, with reasonable outcomes [35], but we did not perform routine biopsies in our patients due to the concerns in the literature. Centers' experiences differ in regard to preoperative tumor biopsy.
Therefore, noninvasive methods, including tumor markers, CT findings and PET, are desirable for predicting the tumor biology. In addition, bridging therapies (transarterial chemoembolization [TACE], transarterial radioembolization [TARE] and external beam radiation) prior to LT help control local disease progression [36]. Moreover, imaging modalities have dramatically improved in the last two decades. Some radiologic imaging findings, such as large tumor diameter, tumor margins, the presence of a tumor capsule, the distance from tumor to liver capsule, tumor internal homogeneity, contrast enhancement patterns on postcontrast dynamic and hepatobiliary phase images, and diffusion restriction on diffusion-weighted images, can predict microvascular invasion (MVI). In addition, some suggestive imaging findings, especially the beak and bulging signs, may predict MVI, prompting the clinician to perform a biopsy [37]. We routinely used these noninvasive methods during our evaluation. In some borderline cases, we performed a biopsy for the final decision.
Many earlier studies have shown the importance of vascular invasion as a prognostic marker. Pommergaard et al. reported that patients without vascular invasion, regardless of the size and number of nodules, had survival comparable to that under the MC and up-to-7 criteria [32]. On the basis of the idea that incorporating tumor biological markers and predicting microvascular invasion and poor differentiation can exclude patients with a high risk of recurrence before LT, some expanded criteria that include such markers have recently been proposed [20,38,39]. Our data also support these reports in the literature.
Piardi et al. reported that tumor size greater than 8 cm, AFP level and histologic grading were the only independent significant prognostic factors in their LT patients with HCC [31]. With our evaluation system, which looks at many factors related to poor outcome, we performed only a limited number of cases with a largest tumor size of more than 8 cm. In this limited number of cases with a largest tumor size over 8 cm, our data support this literature, with the exception of AFP level. Our experience showed that with increasing largest tumor size, additional poor prognostic factors were seen more often. In addition, when we reviewed our data case by case, a substantial number of our patients with more than 10 tumors (n = 13) or with a largest tumor size greater than 7 cm (n = 11) who underwent LT and were followed long term had the opportunity to live more than 5 years instead of losing their lives much earlier (Tables 3 and 4).
Pre-transplant AFP is independently associated with survival after post-transplant HCC recurrence, suggesting that elevated levels reflect increased tumor aggressiveness that is present even with recurrent disease [40][41]. Elevated AFP is an important prognostic marker associated with the presence of microvascular invasion and poor tumor differentiation [42]. Hong et al. [43] reported that serum AFP levels and 18F-FDG PET positivity represent, in place of morphological factors, new biological criteria that can improve the risk stratification of tumor recurrence beyond the MC for LDLT recipients with HCC [43][44]. Although AFP is the most widely used tumor marker for HCC, only half of all tumors secrete this protein. Thus, AFP may not be an optimal indicator of risk [2]. According to our data, AFP cannot be the only marker associated with poor outcomes. When we looked case by case at our 16 HCC patients with AFP levels higher than 400 ng/mL, remarkably, 14 of them were still alive years after LT (Table 5).
Many new prognostic biomarkers have been studied in the literature to establish the outcomes of HCC patients undergoing LT. The most examined biomarker is the serum AFP level. In addition, an association has been found between increased HCC recurrence and high serum levels of des-gamma-carboxy prothrombin, E-cadherin and beta-catenin, and high HCC expression of GPC-3, but additional research is necessary to establish the prognostic role of these biomarkers [45].
Most findings in the literature support the view that poor biological behavior is the most important factor affecting outcome. Tumor differentiation is the most well-established such factor and is widely supported by the literature. According to our analysis, tumor differentiation is the only factor that impacts the outcome, which may conflict with some literature findings regarding AFP level, tumor size, 18F-FDG PET and other biomarkers. With our evaluation system, we may have had the chance to transplant only a limited number of patients in whom some of these factors, which might also impact the outcome, could be analyzed; this is one of the limitations of our analysis. However, we strongly support case-by-case evaluation for LT in HCC cases by a multidisciplinary team.
Some studies have suggested that immunosuppression with a mammalian target of rapamycin (mTOR) inhibitor, such as everolimus or sirolimus, may reduce the risk of HCC recurrence after LT [46]. We followed most of our cases with low tacrolimus levels and switched tacrolimus to an mTOR inhibitor in a limited number of recurrence cases. We always tried to treat the recurrent lesions with surgical or interventional radiological treatment. Our experience with mTOR inhibitors is too limited for statistical analysis.
Although overall outcomes are better after LDLT for the treatment of HCC, some previous studies have reported that LDLT HCC recipients had higher recurrence rates compared to DDLT HCC recipients. This was postulated to be due to the lack of ability to test the tumor biology during the waitlist time, which is shorter for LDLT recipients [21,30,47]. Hypotheses include fast-tracking patients to LT, growth factors and cytokines released during the rapid regeneration of a partial graft, and surgical technique (perhaps addressed by a no-touch total hepatectomy technique). Since LD grafts are not public resources, it is already accepted in the LT community that the recurrence risk of HCC, the survival benefit of the recipient, and the wishes of the donor should be considered for LDLT candidate selection [30]. In addition, experience with successful LDLT after intensive multidisciplinary treatment for HCC patients with portal vein tumor thrombus, which has been accepted as a contraindication even in the LDLT setting, has been reported in the literature [48][49][50].
Our endorsement of LDLT only makes sense if we can provide a safe donation environment with a low complication profile. Many centers from Turkey have reported their living liver donation complication rates [51][52][53][54]. We previously reported the complications and outcomes of our 890 living donor hepatectomy cases [8]. No donor deaths were reported in our series. Greater experience and knowledge of LDLT will allow reduced donor morbidity.
Both the European Association for the Study of the Liver (EASL) and the American Association for the Study of Liver Diseases (AASLD) recently revised their guidelines to continue to recommend MC as the benchmark for selection, arguing that there is a lack of uniform consensus and that limitations are inherent to retrospective analyses [55][56]. The literature and guidelines strongly encourage centers moving away from MC to carefully collect prospective data on outcomes using new criteria for selecting patients [57].
Conclusion
We know that criteria for any medical treatment are important and are usually mandatory. Our data statistically showed that the UCSF criteria seem more reasonable than the MC. The literature supports LDLT and allows us to push the limits forward. The question "Are the criteria always right?" is always on the table. According to our experience and with the support of the literature, we can conclude that, with the advantage of LDLT, all HCC patients deserve case-by-case discussion for LT with the support of the scientific literature. In borderline cases, tumor biopsy might help to make a decision about whether to perform LT. | 2021-03-24T06:16:52.710Z | 2021-03-23T00:00:00.000 | {
"year": 2021,
"sha1": "9ef6d33f318e153e4797346825a6d2dcac85052f",
"oa_license": null,
"oa_url": "https://doi.org/10.3906/sag-2101-51",
"oa_status": "BRONZE",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ae3ac0cdc40c76e687510234c4e99536c80bd613",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118903864 | pes2o/s2orc | v3-fos-license | Disorder Operators, Quantum Doubles, and Haag Duality in 1+1 Dimensions
We demonstrate the role of Drinfeld's quantum double D(G) as a spontaneously broken hidden symmetry in a large class of massive quantum field theories in 1+1 dimensions with compact symmetry group G. Our considerations are independent of exact integrability. The main technical ingredient is an assumption concerning the statistical independence of fields localized in spacelike separated regions which should hold in all reasonable massive models. The present note is an abridged version of hep-th/9606175.
INTRODUCTION AND PREREQUISITES
Since the notion of the 'quantum double' was coined by Drinfel'd in his famous ICM lecture [8] there have been several attempts aimed at a clarification of its relevance to two dimensional quantum field theory. The quantum double appears implicitly in the work [3] on orbifold constructions in conformal field theory, where conformal quantum field theories (CQFTs) are considered whose operators are fixpoints under the action of a symmetry group on another CQFT. Whereas the authors emphasize that 'the fusion algebra of the holomorphic G-orbifold theory naturally combines both the representation and class algebra of the group G' the relevance of the double is fully recognized only in [4]. The quantum double also appears in the context of integrable quantum field theories, e.g. [1], as well as in certain lattice models (e.g. [18]). Common to these works is the role of disorder operators or 'twist fields' which are 'local with respect to A up to the action of an element g ∈ G' [3].
In this note, which is a compressed version of [12], we will use the methods of algebraic quantum field theory [10,11] to demonstrate the role of the quantum double as a hidden symmetry in every quantum field theory with group symmetry in 1 + 1 dimensions fulfilling (besides the usual assumptions like locality) only two technical assumptions (Haag duality and split property, see below), but independent of conformal covariance or exact integrability. As in [5] we will consider a quantum field theory to be specified by a net of von Neumann algebras, i.e. a map O → F(O) which assigns to any bounded region in 1 + 1 dimensional Minkowski space a von Neumann algebra (i.e. an algebra of bounded operators closed under hermitian conjugation and weak limits) on the common Hilbert space H such that isotony holds: O_1 ⊆ O_2 ⇒ F(O_1) ⊆ F(O_2). (1.1) The quasilocal algebra F, the norm closure of ∪_{O∈K} F(O), K being the set of all double cones (intersections of forward and backward lightcones), is assumed to be irreducible: F′ = C1. In order to simplify the exposition we restrict ourselves in this note to pure Bose fields, i.e. local commutativity, F(O_1) ⊆ F(O_2)′ whenever O_1 and O_2 are spacelike separated (1.2) (for the case of general Bose-Fermi commutation relations see [12]). Poincaré covariance is implemented by assuming the existence of a (strongly continuous) unitary representation on H of the Poincaré group P such that U(L) F(O) U(L)* = F(LO), L ∈ P. (1.3) The spectrum of the generators of the translations (momenta) is required to be contained in the closed forward lightcone and the existence of a unique vacuum vector Ω invariant under P is assumed. Covariance under the conformal group, however, is not required.
Our last postulate (for the moment) concerns the inner symmetries of the theory. There shall be a compact group G, represented in a strongly continuous fashion by unitary operators on H leaving invariant the vacuum, such that the automorphisms α_g(F) = Ad U(g)(F) of B(H) respect the local structure: α_g(F(O)) = F(O) ∀g ∈ G, O ∈ K. (1.4) The action may be assumed faithful, i.e. α_g ≠ id ∀g ≠ e. (Compactness of G need in fact not be postulated, as it is known to follow from the split property which will be introduced later. For the sake of simplicity we assume in this note that the group G commutes with the Poincaré group, see [12] and [13, Appendix] for further discussion.) The observables are now defined as the gauge invariant operators: A(O) = F(O) ∩ U(G)′. (1.5) This framework was the starting point for the investigations in [5] where in particular properties of the observable net (1.5) and its representations on the sectors in H, i.e. the G-invariant subspaces, were studied. One important notion examined in [5] was that of duality, designating a maximality property in the sense that the local algebras cannot be enlarged (on the same Hilbert space) without violating spacelike commutativity. The postulate of duality for the fields consists in strengthening the locality postulate (1.2) to F(O) = F(O′)′ ∀O ∈ K. (1.7) A sector H_1 is called simple if the group G acts on it via multiplication with a character χ, i.e. U(g)↾H_1 = χ(g) 1. (1.8) Clearly the vacuum sector is simple. Furthermore it has been shown [5, Theorem 6.1] that the irreducible representations of the observables on the charge sectors in H are strongly locally equivalent to the vacuum representation in the sense that for any representation π(A) = A↾H_π and any O ∈ K, π↾A(O′) ≅ π_0↾A(O′), where π_0 denotes the vacuum representation. (1.9) The fundamental facts (1.7) and (1.9), which have come to be called Haag duality and the DHR criterion respectively, were taken as starting points in [6] where a more ambitious approach to the theory of superselection sectors was advocated and developed to a large extent. The basic idea was that the physical content of any quantum field theory should reside in the observables and their vacuum representation, whereas all other physically relevant representations as well as unobservable charged fields interpolating between those and the vacuum sector should be constructed from the observable data. The vacuum representation was postulated to satisfy (1.7), while (1.9) was chosen as a selection criterion for a class of interesting representations. It may be considered as one of the triumphs of the algebraic approach that it has finally been possible to prove [7, and references given there] the existence of a compact group G 'describing' the structure of the DHR sectors and of an essentially unique net of field algebras acted upon by G and generating the charged sectors from the vacuum.
In 1+1 dimensions part of the analysis breaks down due to the topological peculiarity that the spacelike complement of a bounded (connected) region consists of two connected components, as a consequence of which the permutation group governing the statistics is replaced by the braid group. The algebraic formalism of [6] was adapted to this situation in [9], see also [13]. It is still not known by which structure the compact group appearing in the higher dimensional situation has to be replaced, if a completely general solution to this question exists at all. Even though in 1 + 1 dimensions one cannot conclude the existence of a field net with group symmetry, it appears interesting to study nets of observables arising as fixpoint nets ('orbifold theories'). This is the aim of the research to be reported here, which in particular leads to a complete understanding of another peculiarity in 1 + 1 dimensions, as will be discussed in the last section.
SPLIT PROPERTY, DISORDER OPERATORS, AND NONLOCAL FIELD EXTENSIONS
We begin by introducing some notation.
The wedge regions associated to a double cone, used throughout this section, look graphically as in Figure 1.
Figure 1. Wedges associated to a double cone.
In analogy to ideas in statistical mechanics we introduce the notion of a family of disorder operators which consists, for any O ∈ K and any g ∈ G, of two unitary operators U_L^O(g) and U_R^O(g), defined by (2.1). A disorder operator thus interpolates between the action of an unbroken global symmetry on one wedge and the trivial action on a wedge properly contained in the spacelike complement of the first one.
In general it is not obvious that disorder operators exist. Therefore we introduce as another axiom the split property for wedges, which formalizes a strong form of statistical independence of fields localized in spacelike separated regions. A net of field algebras has this property if for every double cone O the inclusion of the algebras of the two associated spacelike separated wedges is split, i.e. the von Neumann algebra they generate is naturally isomorphic to their tensor product. It is known that free massive scalar and Dirac fields satisfy this property and it seems reasonable to expect this to be the case in every well behaved massive theory. For a discussion of related properties and for further references we refer to the detailed review [17].
Using essentially the same methods as in [2] one can show that the split property implies the existence of disorder operators U_L^O(g), U_R^O(g) for all O ∈ K, g ∈ G. Besides the above defining equations these operators have additional properties, (2.2) and (2.3): we thus have, for each double cone O, a 'factorization' of the global symmetry group into two commuting representations which are localized along half lines. An immediate consequence of (2.2) and the representation property is the covariance of the disorder operators under global gauge transformations, U(h) U_L^O(g) U(h)* = U_L^O(hgh⁻¹). Arguing that in view of (2.1) the operators U_L^O(g), U_R^O(g) are associated to the double cone O, we define the extension of the field algebras F̂(O) generated by F(O) together with the disorder operators U_L^O(g), g ∈ G. One can go from the extended algebras back to the original ones by restricting to the invariant elements under G. Furthermore, one can show F̂(O) to be isomorphic to the crossed product of F(O) by the automorphism group α_g^O = Ad U_L^O(g). In order to simplify the exposition, from now on we assume the group G to be finite. Most of our results remain valid for compact groups, see [12]. In the next section this structure will be analyzed quite explicitly.
SPONTANEOUSLY BROKEN QUANTUM DOUBLE SYMMETRY
We have already remarked that the algebra F̂(O) is isomorphic to a crossed product, which is equivalent to the existence of a G-gradation. This implies that every x ∈ F̂ has a unique decomposition into components of definite G-grade. Given an arbitrary function f ∈ C(G) on the group there is thus an action γ_f on F̂ which multiplies the grade-g component of an element by f(g). In particular, for the delta-functions δ_g(h) = δ_{g,h} we obtain the projections γ_g := γ_{δ_g}. With (2.3) at hand, we are now prepared to exhibit the action of the quantum double D(G) on the extended algebras. Let C(G) be the algebra of (complex valued) functions on the finite group G and consider the adjoint action of G on C(G) according to α_g : f → f ∘ Ad(g⁻¹). The quantum double D(G) is defined as the crossed product D(G) = C(G) ⋊_α G of C(G) by this action. In terms of generators, D(G) is the *-algebra generated by unitary and selfadjoint, respectively, elements U_g, V_h, g, h ∈ G with the relations U_g U_h = U_{gh}, V_g V_h = δ_{g,h} V_g, U_g V_h U_g* = V_{ghg⁻¹}, and the identification U_e = Σ_g V_g = 1. The action of D(G) is now defined on the basis elements by γ_{V_g U_h}(x) := γ_g(α_h(x)) and for all of D(G) by linear extension. One easily verifies γ_{ab}(x) = γ_a ∘ γ_b(x) and γ_1(x) = x, whereas the well-known Hopf algebra maps on D(G) [4] lead to ε(V_g U_h) = δ_{g,e}, hence γ_a(1) = ε(a) 1. (3.6)
(We have used the standard notation Δ(a) = a_(1) ⊗ a_(2) for the coproduct.) This proves that γ : D(G) × M → M defines an action of D(G) on the local algebras F̂(O). As this action is compatible with the local structure it extends to a unique action on the quasilocal algebra F̂.
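As a concrete illustration of the generators-and-relations description given above, the following sketch (ours, not part of the original paper) implements the multiplication rule of D(G) for the finite group G = S_3 and checks the defining relations numerically; the choice of S_3 and the dictionary representation of elements are assumptions made only for this example.

```python
# Illustrative sketch: the multiplication rule of D(G) for a small finite group, using the
# generators-and-relations description quoted above. G = S_3 is an arbitrary choice;
# elements of D(G) are dicts {(a, g): coeff} representing linear combinations of V_a U_g.
from itertools import permutations

G = list(permutations(range(3)))                          # S_3 as permutation tuples
def mul(g, h):  return tuple(g[h[i]] for i in range(3))   # composition g∘h
def inv(g):     return tuple(sorted(range(3), key=lambda i: g[i]))
def conj(g, b): return mul(mul(g, b), inv(g))             # g b g^{-1}

def V(a):  return {(a, G[0]): 1.0}                        # V_a = V_a U_e
def U(g):  return {(b, g): 1.0 for b in G}                # U_g = (sum_b V_b) U_g, since sum_b V_b = 1

def multiply(x, y):
    """(V_a U_g)(V_b U_h) = delta_{a, g b g^{-1}} V_a U_{gh}, extended bilinearly."""
    out = {}
    for (a, g), ca in x.items():
        for (b, h), cb in y.items():
            if a == conj(g, b):
                key = (a, mul(g, h))
                out[key] = out.get(key, 0.0) + ca * cb
    return {k: c for k, c in out.items() if c != 0.0}

# Check the defining relations: V_a V_b = delta_{a,b} V_a and U_g V_b U_g^{-1} = V_{g b g^{-1}}
a, b, g = G[1], G[2], G[3]
assert multiply(V(a), V(b)) == {}                         # a != b  ->  product vanishes
assert multiply(V(a), V(a)) == V(a)
assert multiply(multiply(U(g), V(b)), U(inv(g))) == V(conj(g, b))
print("D(S_3) relations verified on sample elements")
```

The same multiplication rule, (V_a U_g)(V_b U_h) = δ_{a, gbg⁻¹} V_a U_{gh}, is what underlies the homomorphism property γ_{ab} = γ_a ∘ γ_b verified above.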
In the case of an abelian group G this can be reformulated in terms of commuting actions of G and the dual group Ĝ, the total symmetry group thus being G × Ĝ. It is clear that the Ĝ-part of the symmetry is spontaneously broken in the sense that there are no unitary operators on H implementing this action. The same holds, of course, in the non-abelian case, where the symmetry being unbroken would mean that there exist operators U(a) ∀a ∈ D(G) such that U(a) x = γ_{a_(1)}(x) U(a_(2)). (3.7) Despite the partial breakdown of the symmetry one can prove that the spectrum of the action of D(G) is complete in the sense that every finite dimensional representation of D(G) is realized by a corresponding multiplet of operators ψ_i (with D_1, D_2 the matrices of the respective representations). The operators ψ_i can be chosen as isometries fulfilling relations which define a unital *-endomorphism ρ of F̂ and its left inverse [6], respectively. The relative locality of A and F̂ implies the restriction of ρ to A to be localized in O in the sense that ρ(A) = A ∀A ∈ A(O′). If the net A satisfied Haag duality we could conclude by standard arguments [6] that ρ maps A into itself. Despite the fact that Haag duality does not hold for A, this is still true, however, as follows from the D(G)-invariance of ρ(x) for x ∈ A. We can thus use the formalism of [6] to define the statistics operator ε(ρ, ρ). The statistics dimension d_ρ turns out to coincide with the dimension of the representation D of D(G), whereas the statistics phase ω_ρ is given in terms of the central unitary element X ∈ D(G), which is just the (inverse of the) 'ribbon element' [15] of the modular Hopf algebra D(G). Finally, defining the monodromy operators ε_M(ρ_1, ρ_2) = ε(ρ_1, ρ_2)ε(ρ_2, ρ_1) we can compute the statistics characters [14], in which the element I = Rσ(R), again well known in the context of modular Hopf algebras, appears. We have thus established, for a special class of models, a correspondence between the notions of algebraic QFT and those of [15]. Yet, the framework is not exactly as in [6,9]. The point is that one can prove [13] that our assumptions, in particular the split property for wedges, preclude the existence of nontrivial DHR sectors. That this no-go theorem does not apply in the present situation is due to the fact, to be discussed in the rest of this note, that Haag duality does not hold for the fixpoint net A.
HAAG DUALITY
We will now comment on a less well known two-dimensional peculiarity, namely the fact that the step [5] from (1.6) to (1.7) fails in 1+1 dimensions. This means that one cannot conclude from (twisted) duality of the fields that duality holds for the observables in simple sectors, which in fact is violated. The origin of this phenomenon is easily understood. Let O ∈ K be a double cone. One can then construct gauge invariant operators in F(O′) which are obviously contained in A(O)′ but not in A(O′). This is seen by remarking that the latter algebra, belonging to a disconnected region, is defined to be generated by the observable algebras associated to the left and right spacelike complements of O, respectively. This algebra does not contain gauge invariant operators constructed using fields localized in both components. The weaker property of wedge duality remains true, however. Let H_1 be a simple sector; then wedge duality holds in the form (4.1). Defining now the dual net A^d from the wedge algebras, it is easy to verify that Haag duality holds for A^d. One would, however, like to know which additional operators are obtained in this way. Using the above methods we can actually compute the dual net in terms of A(O) and the disorder operators. One can show that the net defined in (2.6) is local and leaves the sectors in H invariant so that it constitutes a local extension of A in each sector. One can in fact prove it to coincide with the dual net: Â(O)↾H_1 = A^d(O) for every simple sector H_1. This is reminiscent of the analysis in [16] where nets of observables (in at least 2+1 dimensions) which arise as fixpoints under a group of inner symmetries from a field theory were shown to violate Haag duality whenever the symmetry is spontaneously broken in the sense that the vacuum is not invariant under the whole group. Again the observables fulfill a weaker property (essential duality) which allows one to construct a maximal local extension satisfying Haag duality. This dual net was shown in [16] to be just the fixpoint net of the field net under the unbroken part of the gauge group. The analogy to the situation studied above is obvious, for here Â = F̂^G consists of the invariants under the unbroken part G ⊂ D(G) of the quantum double.
ACKNOWLEDGEMENTS
I am grateful to the organizers of the Cargèse summer school for the opportunity to present this seminar. Thanks are also due to K.-H. Rehren for many useful discussions and to the Studienstiftung des deutschen Volkes for financial support. | 2019-04-14T02:59:14.905Z | 1996-11-18T00:00:00.000 | {
"year": 1996,
"sha1": "323a57bc3f56f29c964dc677af7b7ef379141c81",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9611131.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "323a57bc3f56f29c964dc677af7b7ef379141c81",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250029039 | pes2o/s2orc | v3-fos-license | Advanced two-stage cascade configurations for energy-efficient –80 °C refrigeration
In response to the COVID-19 pandemic, some vaccines have been developed requiring ultralow-temperature refrigeration, and the number of these freezers has increased worldwide. Ultralow-temperature refrigeration operates with a significant temperature lift and, hence, a massive decrease in energy performance. Therefore, cascade cycles based on two vapor compression single-stage cycles are traditionally used for these temperatures. This paper proposes the combination of six different cycles (single-stage with and without internal heat exchanger, vapor injection, liquid injection, and parallel compression with and without economizer) in two-stage cascades to analyze the operational and energy performance in ultralow-temperature freezers. All this leads to 42 different configurations in which the intermediate cascade temperature is optimized to maximize the coefficient of performance. Ultra-low global warming potential natural refrigerants such as R-290 (propane) and R-170 (ethane) for the cascade high- and low-temperature stages have been considered. From the thermodynamic analysis, it can be concluded that liquid and vapor injection cascade configurations are the most energy-efficient, more specifically those containing a vapor injection in the low-temperature stage (0.89 coefficient of performance, 40 % higher than traditional configurations). Then, using an internal heat exchanger for such low temperatures is unnecessary in terms of energy performance. The optimum intermediate cascade temperature varies significantly among cycles, from −37 °C to 2 °C, substantially impacting energy performance. Parallel compression configuration improves energy performance over single-stage cycles, but not as much as multi-stage (between 20 % and 30 % lower coefficient of performance). For most of the low-temperature cycles, the high-temperature stage can be based on a single-stage cycle while keeping the maximum coefficient of performance.
Introduction
Ultralow-temperature refrigeration consists of cooling a particular product or room below a specific temperature, generally below −50 °C [1]. The recent appearance of the SARS-CoV-2 vaccines has put the spotlight on a problem that has been present in society for a long time: deep freezing. Pfizer-BioNTech announced that its vaccines must be stored between −60 °C and −80 °C [2]. Consequently, news reports have highlighted the huge logistical challenge of keeping vaccines under such conditions.
Refrigeration in this temperature range is usually based on vapor compression systems. There is a lack of studies and regulations in the ultralow-temperature range to motivate advanced environmentally friendly solutions [3]. The impact of energy efficiency on equivalent carbon dioxide emissions requires studying the broadest combination of configurations in detail. Cascade and auto-cascade systems are configurations typically found in commercial low or ultralow-temperature freezers [4], working between 20 °C and 30 °C ambient temperature and −50 °C to −80 °C freezing conditions [5].
The cascade configuration thermally connects single-stage cycles through a cascade heat exchanger, choosing the most suitable refrigerant for each temperature level. Most of the available studies for this application consider only two stages [6]. Compared to a two-stage configuration, Mumanachit et al., [7] observed that a two-stage cascade is more efficient below the coefficient of performance (COP) optimal point and cost-effective below -46.2 • C. Mateu-Royo et al., [8] observed that a two-stage cascade becomes the most appropriate configuration for high temperature lifts (60 • C and above).
Furthermore, the control of operational parameters of two-stage cascades is essential for proper energy performance. Chung et al., [9] observed that a higher low-temperature (LT) compressor discharge pressure allows a lower evaporator temperature. In addition, an inadequate pressure adjustment can cause fluctuations in the compressor operation and the intermediate cascade temperature distribution. Lee et al., [10] concluded that the COP increases with increasing LT evaporation temperature but decreases with increasing high-temperature (HT) condensation temperature and temperature variation. Chae and Choi [11] showed that the COP is lower when the system is undercharged because the heat transfer capacity decreases. Deymi-Dashtebayaz et al., [12] proposed the Pareto front curve to obtain the optimal operational conditions and refrigerants considering maximum COP, maximum exergy efficiency and minimum total cost rate. R-41/R-161 and R-41/R-1234ze(E) present the highest COP and exergy efficiency and lowest total cost rate.
Cascade configurations based on single stages can be modified by adding elements such as the internal heat exchanger (IHX, also known as liquid-to-suction heat exchanger). Di Nicola et al., [13] proposed an IHX in the LT stage, concluding that it could be helpful. Bhattacharyya et al., [14] optimized a cascade system with IHX in HT and LT stages and observed that the system performance does not depend on the IHX effectiveness. Liu et al., [15] concluded that the COP is lower if only the LT IHX operates, but the cycle with IHX in both stages has the potential to be energy efficient. Also, Dubey et al., [16] observed that the HT IHX impact on system performance is higher than that of the LT stage.
Due to the greenhouse effect caused by traditional refrigerants, the European Union approved the F-Gas Regulation 517/2014 [21], which aims to reduce refrigerants' equivalent carbon dioxide emissions by two-thirds in 2030 compared to 2014 levels. Scientific articles published about refrigerants show that studying environmentally friendly fluids in cascades is essential.
Many previous studies considered ammonia (R-717) and carbon dioxide (R-744) for low temperature refrigeration (down to −40 °C evaporation temperature). Dopazo et al., [22] quantified with R-744 and R-717 that an increase in the evaporator temperature from −55 °C to −30 °C leads to a 70 % higher COP. In the same way, an increase in the condenser temperature from 25 °C to 50 °C causes a 45 % lower COP. Additionally, if the intermediate cascade temperature increases from 3 °C to 6 °C, it causes a 9 % COP reduction. Di Nicola et al., [13] compared different hydrofluorocarbons (HFCs) with R-717 at −70 °C, concluding that the latter is 5 % superior in COP. Getu and Bansal [23] found with R-744 and R-717 that a higher superheating degree and mass flow rate decrease the COP. Still, this can be counteracted with a higher subcooling degree in both stages. Eini et al., [24] compared R-744/R-717 and R-744/R-290 pairs, concluding that the one with R-717 is inherently safer. Ust and Karakurt [25] concluded that R-717 in the HT stage causes higher energy performance than R-290, R-404A, and R-507. Turgut and Turgut [26] tested the refrigerant pairs R-744/R-717, R-744/R-134a, and R-744/R-1234yf, observing that R-744/R-1234yf is superior as regards efficiency and annual costs. On the other hand, subcooling and superheating degrees have a negligible influence on the cost of the equipment.
According to Sun et al., [27], an optimal intermediate cascade temperature is essential, and R-41 is appropriate to replace R-23. Kilicarslan and Hosoz [28] expanded the selection of refrigerants by proposing R-152a/R-23, R-290/R-23, R-507/R-23, R-134a/R-23, R-717/R-23, and R-404A/R-23 to assess the influence of the operating temperature on the COP. Aktemur et al., [29] considered other novel refrigerants such as R-1243zf, R-423A, R-601, R-601a, R-1233zd(E) and RE-170, concluding that R-432A shows the lowest energy performance. Mota-Babiloni et al., [30] [32] adapted a standard low-temperature R-290 packaged unit with R-170 between −80 °C and −65 °C evaporating temperature, and they measured COP between 0.6 and 1.6. The literature shows that cascade systems at ultralow temperatures have hardly been studied. Only a few works propose IHXs, but there are more possibilities to modify the single-stage cycles composing cascade configurations and increase the overall energy performance. Moreover, few works consider the refrigerant pair R-290/R-170, which can be the most promising in terms of energy performance and environmental protection. Most papers dealing with cascade configurations hardly reach the extreme evaporation temperature of −80 °C. Because of the recent interest in this application and the evident lack of studies, this work proposes the combination of different vapor compression cycles in two-stage cascade configurations. Firstly, the cycles to be combined are presented. Next, the methods and strategy of the simulation, including equations, input parameters and refrigerants, are presented. Then, the computational simulation results are analyzed and discussed, focusing on COP, optimum intermediate cascade temperature, and mass flow rate. Finally, the main conclusions of the study are summarized.
Methods
This section explains the configurations used, from the standard vapor compression cycles to the cascade ones, and then describes the simulation strategy, from the assumptions to the final modeling details.
Configurations
This article combines six standard vapor compression cycles in the high- and low-temperature stages of a cascade configuration. As ultralow-temperature applications must cover a remarkable temperature lift (difference between condensation and evaporation temperatures), cascade configurations could benefit from improvements to the individual stages. Therefore, all possible combinations are simulated under the same operating conditions, and their energy performance is assessed. The cycles combined in this paper are shown in Fig. 1.
A total of 42 configurations are defined, considering the cycles in both possible cascade stages. Table 1 describes all configurations simulated in this paper and the abbreviations proposed to simplify the analysis. Fig. 2 illustrates the configuration of a two-stage cascade based on two single-stage cycles (S+S). This configuration is used as a baseline, and other configurations replace the single-stage cycles to achieve the maximum number of possible combinations. The intermediate cascade temperature is decisive for the system energy performance, among other factors.
The configurations explained above have been calculated with the following strategy.
Strategy
The simulation of the configurations is based on the methods presented in Fig. 3, where the input parameters are configuration, refrigerants, and boundary conditions and assumptions. This model is developed using the software Engineering Equation Solver (EES) [33] version Academic Commercial V100.835-3D. The Golden Search Algorithm incorporated in this software is used to find the optimum intermediate cascade temperature that maximizes overall COP.
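As a rough illustration of this optimization step, the sketch below applies a golden-section search to a stand-in COP curve. The real model evaluates the full cascade in EES, so the placeholder function and its numerical values are assumptions for illustration only.

```python
# Minimal sketch: golden-section search for the intermediate cascade temperature
# that maximizes COP. cop_of_t_int is a placeholder for the full cascade model.

def golden_section_max(f, lo, hi, tol=1e-3):
    """Maximize f on [lo, hi], assuming a single interior maximum."""
    invphi = (5 ** 0.5 - 1) / 2                      # 1/phi ~ 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) > f(d):                              # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                        # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

def cop_of_t_int(t_int_c):
    """Placeholder COP curve with a maximum between -80 and 30 degC (illustrative)."""
    return 0.9 - 2.5e-4 * (t_int_c + 24.0) ** 2

t_opt = golden_section_max(cop_of_t_int, -75.0, 25.0)
print(f"optimum intermediate temperature ~ {t_opt:.1f} degC, COP ~ {cop_of_t_int(t_opt):.3f}")
```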
Other information required for the modeling is exposed in the following subsections.
Table 1. List of configurations studied in this paper (columns: HT cycle, LT cycle, abbreviation).
Boundary conditions and assumptions
The input parameters used in calculating the configurations are shown in Table 2. The evaporating temperature is set at the typical minimum value of ultralow-temperature freezers, even though they can work from −50 °C; this temperature was selected to cover the most critical condition. Then, the condensing temperature of 30 °C was chosen to simulate controlled room conditions. A cooling capacity of 10 kW is fixed to simulate medium-capacity refrigeration conditions present in ultralow-temperature rooms. Superheating and subcooling degrees of 5 K and 2 K are selected to propose optimized systems with minimum influence of these parameters.
Isenthalpic expansion is assumed in expansion valves present in the circuit. Pressure drops and heat exchange with the ambient in components and lines are neglected.
Modeling common details
This section presents the common equations used in the modeling process of each cycle configuration.
The refrigerant mass flow rate is calculated using the cooling capacity of the evaporator, Eq. (1).
The isentropic efficiency is used to calculate the thermodynamic state at the discharge of the compression stage, Eq. (2), which is the ratio between the ideal specific compression work and the real one.
Regarding the isentropic efficiency of the compressors, Eq. (3) is proposed. It is expressed in terms of compression ratio, Eq. (4).
The compressor model is based on manufacturer data. Fig. 4 shows the validation of the proposed compressor model against manufacturer values and demonstrates the excellent match between the two.
The compressor power consumption is expressed in Eq. (5) as the product of mass flow rate and the real specific compression work.
The total compressor power consumption has been calculated as the sum of all compressors (or compression stages), Eq. (6).
The coefficient of performance (COP) depends on the cooling capacity and the power consumption, defined as Eq. (7).
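The following sketch illustrates Eqs. (1)–(7) for one compression stage using CoolProp property calls and an assumed constant isentropic efficiency. The operating conditions (an R-290 stage at −30 °C/30 °C) and the efficiency value are illustrative choices, not the paper's fitted compressor model.

```python
# Self-contained sketch of the common cycle relations (mass flow, compressor power, COP)
# for one stage. Isentropic efficiency is an assumed constant here; the paper instead
# fits it to manufacturer data as a function of the compression ratio.
from CoolProp.CoolProp import PropsSI

fluid = "R290"
T_evap, T_cond = 243.15, 303.15          # K, illustrative evaporating/condensing temperatures
superheat, subcool = 5.0, 2.0            # K
Q_evap = 10e3                            # W, cooling capacity
eta_is = 0.7                             # assumed isentropic efficiency

p_evap = PropsSI("P", "T", T_evap, "Q", 1, fluid)
p_cond = PropsSI("P", "T", T_cond, "Q", 0, fluid)

h_suc = PropsSI("H", "T", T_evap + superheat, "P", p_evap, fluid)   # compressor suction
s_suc = PropsSI("S", "T", T_evap + superheat, "P", p_evap, fluid)
h_dis_is = PropsSI("H", "P", p_cond, "S", s_suc, fluid)             # ideal (isentropic) discharge
h_dis = h_suc + (h_dis_is - h_suc) / eta_is                         # real discharge, as in Eq. (2)
h_liq = PropsSI("H", "T", T_cond - subcool, "P", p_cond, fluid)     # condenser outlet
h_evap_in = h_liq                                                   # isenthalpic expansion valve

m_dot = Q_evap / (h_suc - h_evap_in)                                # Eq. (1)
W_comp = m_dot * (h_dis - h_suc)                                    # Eq. (5)
print(f"m_dot = {m_dot*1e3:.1f} g/s, W = {W_comp/1e3:.2f} kW, COP = {Q_evap/W_comp:.2f}")  # Eq. (7)
```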
Cycles
Considering the common equations presented in the previous subsection, each cycle incorporated in the cascade stages has particular characteristics. Additional equations and remarks for the cycles are exposed in the following, considering the schematics seen in Fig. 1.
Single-stage cycle with internal heat exchanger
This cycle has a heat exchanger that receives the refrigerant from the liquid and suction lines on its hot and cold sides, respectively. As a result, the vapor suctioned by the compressor and the liquid before the expansion valve have extra superheating and subcooling degrees, respectively. This heat exchanger is calculated using an energy balance between the hot and cold fluids, Eq. (8). The effectiveness of all internal heat exchangers in the configurations is set at 40 % to control excessive discharge temperatures [34], Eq. (9).
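A minimal numerical sketch of how Eqs. (8) and (9) interact is given below. It assumes constant specific heats and that the effectiveness is referred to the suction-vapor side; both are simplifying assumptions for illustration, since the paper evaluates real refrigerant properties instead.

```python
# Sketch of the IHX: the 40 % effectiveness fixes the extra superheating of the suction
# vapor, and an energy balance gives the extra subcooling of the liquid line.
# All numerical values below are illustrative assumptions.
eps_ihx = 0.40                            # effectiveness used in the paper
T_suc_in, T_liq_in = -78.0, 28.0          # degC, illustrative inlet temperatures
cp_vap, cp_liq = 1.8, 2.6                 # kJ/(kg K), assumed mean specific heats
m_dot = 0.05                              # kg/s, same mass flow on both sides

# Effectiveness referred to the vapor side (assumed minimum-capacity side), cf. Eq. (9):
T_suc_out = T_suc_in + eps_ihx * (T_liq_in - T_suc_in)
q_ihx = m_dot * cp_vap * (T_suc_out - T_suc_in)          # kW picked up by the vapor
# Energy balance, cf. Eq. (8): the liquid line rejects the same heat flow.
T_liq_out = T_liq_in - q_ihx / (m_dot * cp_liq)
print(f"suction: {T_suc_in:.1f} -> {T_suc_out:.1f} degC, liquid: {T_liq_in:.1f} -> {T_liq_out:.1f} degC")
```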
Vapor and liquid injection two-stage cycles
These cycles (vapor injection and liquid injection) have a mass flow rate at an intermediate pressure, which joins the liquid line with the high-pressure compressor suction line. The intermediate mass flow rate of the two-stage cycle with liquid injection has been established at 30 % of the evaporator's low-pressure refrigerant mass flow rate. The intermediate flow rate of two-stage cycles with vapor injection has been based on setting the total superheating degree of the high-pressure compressor at 5 K. Required equations come from the energy and mass balance in the pipe joints, Eq. (10) and Eq. (11).
The intermediate pressure of the compressors in two-stage cycles has been established using the Baumann and Blass correlation shown in Eq. (12).
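The short sketch below illustrates the mixing balances of Eqs. (10)–(11) at the injection point. For the intermediate pressure it uses the common geometric-mean estimate only as a stand-in, since the exact form of the Baumann and Blass correlation, Eq. (12), is not reproduced here; all numerical values are assumed for illustration.

```python
# Sketch of the injection-point balances and a first estimate of the intermediate pressure.
p_evap, p_cond = 0.15e5, 10.0e5          # Pa, illustrative stage pressures
p_int = (p_evap * p_cond) ** 0.5         # geometric-mean estimate (not the paper's Eq. (12))

m_evap, h_dis_lp = 0.040, 720e3          # kg/s and J/kg leaving the low-pressure compressor (assumed)
m_inj, h_inj = 0.012, 560e3              # injected flow and its enthalpy (assumed)

# Mass balance, cf. Eq. (10), and energy balance, cf. Eq. (11), at the joint:
m_hp = m_evap + m_inj
h_hp_suction = (m_evap * h_dis_lp + m_inj * h_inj) / m_hp
print(f"p_int = {p_int/1e5:.2f} bar, high-pressure suction enthalpy = {h_hp_suction/1e3:.0f} kJ/kg")
```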
Parallel compression with and without economizer
This cycle is based on two compressors, but they work in parallel and share the discharge point. Therefore, Eqs. (10) and (11) are applied before the condenser. A variation of the parallel compression cycle is the addition of a heat exchanger named economizer.

Table 2. General conditions for the cycle comparison.
  LT evaporating temperature: −80 °C
  HT condensing temperature: 30 °C
  LT cooling capacity: 10 kW
  LT and HT superheating degree: 5 K
  LT and HT subcooling degree: 2 K

Fig. 4. Isentropic efficiency result of compressor modeling and manufacturer data.
To calculate the necessary mass flow rate through the intermediate line, the economizer effectiveness is set at 80 %. Using the equation for the effectiveness of the heat exchanger, it is possible to determine the necessary mass flow rate and subsequently the enthalpy using an energy balance in the economizer itself. Eq. (13) shows the effectiveness, and Eq. (14) shows the energy balance.
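A compact sketch of this procedure (Eqs. (13)–(14)) follows, with assumed property values and flow rates: the 80 % effectiveness fixes the liquid outlet temperature, and the energy balance then yields the mass flow rate through the economizer branch.

```python
# Sketch of the economizer: effectiveness fixes the main liquid outlet temperature,
# and the energy balance gives the diverted branch flow. Values are illustrative only.
eps_eco = 0.80
T_liq_in, T_branch_in = 28.0, -10.0      # degC: main liquid inlet and expanded branch inlet (assumed)
cp_liq, h_fg_branch = 2.6e3, 320e3       # J/(kg K) and J/kg, assumed properties
m_main = 0.050                           # kg/s through the main liquid line

# Effectiveness, cf. Eq. (13), referred to the main liquid stream being cooled:
T_liq_out = T_liq_in - eps_eco * (T_liq_in - T_branch_in)
q_eco = m_main * cp_liq * (T_liq_in - T_liq_out)          # W removed from the liquid
# Energy balance, cf. Eq. (14): the branch evaporates to absorb that heat.
m_branch = q_eco / h_fg_branch
print(f"liquid subcooled to {T_liq_out:.1f} degC, branch flow = {m_branch*1e3:.1f} g/s")
```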
Cascade cycle
The modeling of the cascade cycles consists of two different stages joined through a cascade heat exchanger. First, the LT condensing temperature is optimized for maximum COP. Then, the parameters of the HT evaporator are obtained considering a temperature difference of 5 K between the LT condensing and the HT evaporating temperatures, Eq. (15). It is assumed that the HT evaporator absorbs all the heat rejected by the LT condenser.
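The coupling between the stages can be summarized in a few lines, as sketched below; the heat loads and compressor powers used are illustrative placeholders rather than results of the model.

```python
# Sketch of the cascade coupling: the HT evaporating temperature follows the LT condensing
# temperature with a 5 K approach (Eq. (15)), the HT stage absorbs the whole LT condenser
# load, and the overall COP combines both compressors. Numbers are illustrative only.
dT_cascade = 5.0                          # K, approach in the cascade heat exchanger
T_cond_LT = -24.0                         # degC, an illustrative optimized LT condensing temperature
T_evap_HT = T_cond_LT - dT_cascade        # Eq. (15)

Q_evap_LT = 10.0                          # kW, fixed cooling capacity
W_LT, W_HT = 5.5, 7.0                     # kW, illustrative compressor powers
Q_cond_LT = Q_evap_LT + W_LT              # LT condenser load ...
Q_evap_HT = Q_cond_LT                     # ... fully absorbed by the HT evaporator
COP_overall = Q_evap_LT / (W_LT + W_HT)
print(f"HT evaporating at {T_evap_HT:.1f} degC, HT load {Q_evap_HT:.1f} kW, overall COP {COP_overall:.2f}")
```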
Refrigerants
Two natural refrigerants with appropriate operational and pressure-temperature characteristics are selected for direct cycle comparison: R-170 (ethane) in the LT stage and R-290 (propane) in the HT stage. The thermodynamic states of the refrigerants are incorporated in the simulation software, EES. The working fluids' main properties (physical, chemical, toxicological, and environmental) are included in Table 3.
As seen, both refrigerants are highly flammable (A3), and additional measures could be necessary depending on the placement of the system and the final refrigerant charge. Besides, they have a very low GWP and are therefore future-proof natural refrigerants that environmental regulations would not restrict. The normal boiling temperature ensures that they can be used without operating under vacuum in either stage. The critical temperature allows subcritical operation. The relatively high heat of vaporization enables a considerable refrigerating effect. Moreover, they can be used with different commercially available lubricating oils. Fig. 5 presents the T-s and P-h diagrams of these refrigerants used in each stage.
Results
This section presents and analyses the main results of the proposed cycles, focusing on the main parameters of interest from an operational and energy point of view: coefficient of performance (COP), intermediate cascade temperature, and mass flow rate.
Coefficient of performance
The COP analysis is divided into subsections according to the primary cycles in which the cascade configuration is based. Fig. 6 shows the COP of the cycles considered in this study without combining them in cascades. This can help illustrate the ability of these cycles to cover high-temperature lifts. In this case, a single refrigerant is considered, R-170.
Base cycles
The use of single-stage cycles for such a high temperature lift is unfeasible. The compressor cannot handle the very high compression ratio caused by the temperature lift, so the compression must be split into more stages. Even if a capable compressor were available, it would have to absorb a very large amount of energy to compress the refrigerant from suction to discharge pressure at a pressure ratio of about 20, which also causes excessive energy consumption. In addition, the discharge temperature is outside the operating range of any compressor (156.5 °C) because there is no intermediate cooling during the compression process. The same happens when using an IHX, but the discharge temperature is even worse because of the higher suction temperature.
The two-stage cycle with liquid injection can control the issue of excessive discharge temperatures and compression ratio: the refrigerant is cooled down during the compression stage by introducing liquid from the condenser. Hence, this modification results in a COP of 0.32. This low COP arises because the cycle forces a refrigerant intended for ultralow temperatures to work at high condensing temperatures for which it is not suited. R-170 has a critical temperature of 32.17 °C, close to the condenser temperature of 30 °C. Therefore, it immediately reaches saturation; the isenthalpic valve places the evaporator inlet at a very high enthalpy, causing an increased mass flow rate that increases the compressor power consumption.
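This argument can be checked with a quick property calculation, sketched below with CoolProp; it only illustrates the near-critical behavior of R-170 and is not part of the paper's model.

```python
# Near its critical point (32.17 degC) the latent heat of R-170 collapses, so liquid formed
# at a 30 degC condensing temperature flashes to a high vapor quality when expanded
# isenthalpically to -80 degC, leaving little refrigerating effect per unit mass.
from CoolProp.CoolProp import PropsSI

h_liq_30 = PropsSI("H", "T", 303.15, "Q", 0, "R170")       # saturated liquid at 30 degC
h_liq_m80 = PropsSI("H", "T", 193.15, "Q", 0, "R170")       # saturated liquid at -80 degC
h_vap_m80 = PropsSI("H", "T", 193.15, "Q", 1, "R170")       # saturated vapor at -80 degC

h_fg_30 = PropsSI("H", "T", 303.15, "Q", 1, "R170") - h_liq_30
quality = (h_liq_30 - h_liq_m80) / (h_vap_m80 - h_liq_m80)  # vapor quality after isenthalpic expansion
print(f"latent heat at 30 degC ~ {h_fg_30/1e3:.0f} kJ/kg, flash quality at -80 degC ~ {quality:.2f}")
```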
The two-stage cycle with vapor injection can be modeled on two premises: setting the mass flow rate or the superheating degree at the high-pressure compressor suction. For the latter case, the mass flow rate at the intermediate pressure is very high because the enthalpy at this point corresponds to saturated vapor. This causes the mass flow rate through the second compressor to be extremely high, increasing the power consumption (at a constant cooling capacity), so the COP is reduced. On the other hand, by setting the mass flow rate to pass through that intermediate line, the cooling effect of the vapor is lower but sufficient so that the discharge temperature is no longer out of range. At the same time, the increase in the refrigerating effect of the evaporator causes the necessary mass flow rate of the cycle to be considerably lower.
Consequently, the necessary compressor power consumption is significantly reduced, and the COP reaches 0.83 for the two-stage cycle with vapor injection. Using the other premise, i.e. setting the superheating degree, results in a very low COP since the mass flow rate is ten times higher. The problem is again the low critical temperature, which makes this cycle unfeasible.
Parallel compression is the last configuration used as a base cycle. On its own, it cannot be calculated because of the same issue as the single-stage cycles: the second compressor has an excessive compression ratio and cannot operate. The same occurs when an economizer is used.
The single-stage cycle and its successive modifications show that an acceptable COP cannot be obtained: compressors cannot handle such a high compression ratio, and the refrigerants capable of reaching such ultralow temperatures have a low critical temperature.

Cascade systems

Once the base cycles have been discussed, they can be combined in a cascade. Fig. 7 shows the COPs of the cascade systems in all possible combinations in which parallel compression is not considered; sixteen configurations are therefore analyzed in this subsection. The two-stage cascade cycle based on single-stage cycles yields a COP of 0.62. The division of the compression process into two compressors explains this significant improvement, as exposed in the previous section. By dividing the cycle into two stages and assigning a specific refrigerant to each temperature level, the maximum discharge pressure of each stage is approximately 10 bar. In addition, the isentropic efficiency of the compressors also has a substantial influence.
By introducing an IHX in each stage, the COP remains similar, 0.62. Two further two-stage cascade configurations have been modeled, each with the IHX in a different stage (HT or LT), to explore the variation of the COP. In both cases the COP remains similar: with the IHX in the HT stage it is 0.63, whereas when placed in the LT stage it is 0.61.
By separating the compression process and introducing a two-stage cycle with liquid injection in both stages, the COP increases to 0.77. The refrigerant injection at the intermediate stage explains the increase, as it allows a lower compression ratio and an increase in energy performance. To a lesser extent, the efficiency of the compressors also leads to an improvement in COP because the compression slope is less steep. As the cascade already reduces the partial compression ratio, dividing each compression ratio again makes the performance increase less noticeable. When the liquid injection is only used in one of the two stages, the COP is similar or even lower. A two-stage cycle with liquid injection in HT and a single-stage cycle in LT decreases the COP to 0.74, whereas the reverse configuration results in a COP of 0.76. If an IHX is used in the single-stage cycle, more considerable variations can be observed than in the two-stage cascade: the COP decreases when the IHX is placed in the LT single-stage cycle (0.73), whereas it rises when it is in the HT stage, 0.77.
The vapor injection causes the same effect as in the single-stage cycle. It enhances the COP up to 0.84 when introduced in both stages. Besides, it yields a greater COP when placed only in the LT stage (0.89) than only in the HT stage (0.85). A similar COP is observed when an IHX is introduced in the single-stage cycle: in the option with vapor injection in HT and single-stage with IHX in LT, the COP decreases to 0.84, whereas in the reverse cycle the COP remains at 0.89.
At this point, it is necessary to analyze the cycles with vapor injection in HT and also those with liquid injection in HT, because the COP results of 0.85, 0.84, 0.74, and 0.73 are not the maximum that could be obtained at the optimum temperature point. The optimal intermediate cascade temperature would appear below the point with this COP, resulting in a COP of 0.9 for the ones with vapor injection. The problem is that the minimum pressure of the R-290 stage is around 0.6 bar at those temperatures, which is not convenient in operational terms. Consequently, the intermediate cascade temperature has been raised manually. In the same system described above, the single-stage cycle can be changed to a two-stage cycle with liquid injection: one configuration with vapor injection in HT and liquid injection in LT and another with vapor injection in LT and liquid injection in HT are considered. The two versions offer different COPs, 0.87 and 0.83, respectively.

Cascade systems with parallel compression

Fig. 8 shows the COP of the cascade systems with a parallel compression cycle in the high- or low-temperature stage. The remaining twenty combinations are covered in this subsection, reaching the 42 configurations studied in this article.
The resulting COP for a parallel compression cycle in both stages is 0.71. By incorporating an economizer, the COP does not vary significantly, 0.72. These results are because the consumption of the second compressor is relatively low due to the cascade cycle. In addition, the parallel compression makes the consumption of the first compressor also low because of the reduced mass flow rate.
This parallel compression system can, in turn, be combined with the rest of the cycles in one stage. By combining a parallel compression system with a single-stage cycle, the COP changes depending on the cycle configuration. When combining parallel compression in HT and a single-stage cycle in LT, a COP of 0.69 is obtained; if an IHX is introduced, a COP of 0.68 is observed. In the reverse case, that is, with the single-stage cycle in HT, the COP increases to 0.75, and when an IHX is introduced it is 0.67.
When replacing the single-stage cycle with a more complex cycle such as the two-stage with liquid injection, a COP of 0.71 is obtained for the parallel compression cycle in HT and liquid injection in LT, whereas in the reverse arrangement the COP stands at 0.66. On the other hand, adding a vapor injection improves each cycle. With a parallel compression cycle in HT and an LT two-stage cycle with vapor injection, the COP increases with respect to the liquid injection cycle, reaching 0.74. In contrast, a two-stage cycle with vapor injection in HT improves the COP to a lesser extent (0.72).
Similar results are observed when changing to an economizer in the parallel compression cycle. In this way, with a single-stage cycle in LT the result is 0.68, while in HT it is 0.67. When an IHX is introduced, the COP remains at 0.67 and 0.68, respectively.
When replacing the single-stage cycle with a two-stage with liquid injection cycle as in the previous case, a slight increase in the COP is observed, reaching 0.70 with liquid injection in LT. Instead, the liquid injection in HT causes lower COP (0.68), always maintaining the parallel compression with economizer in the other stage. Finally, when adding a two-stage cycle with vapor injection, in the case of LT, the COP is 0.74 and in HT 0.76.
The last two cycles analyzed are the combination of parallel compression with and without economizer. The parallel compression in HT and parallel compression with economizer in LT results in a COP of 0.73. In the other case, in parallel compression with economizer in HT and parallel compression in LT, the COP decreases to 0.70.
Intermediate cascade temperature
As previously mentioned, the COP of all cycles has been maximized by employing the optimum intermediate cascade temperature. Fig. 9 shows the values of the optimized LT condensation temperature for all possible combinations studied in this work.
As can be seen, most configurations have their optimum intermediate cascade temperature roughly midway between the condensation and evaporation temperatures. However, some cycles have their optimum intermediate cascade temperature closer to the HT condensation or LT evaporation temperatures, as is the case of cycles with a vapor injection. A vapor injection causes a greater temperature lift in the stage that contains it. This effect occurs both when incorporating it only in HT and only in LT, and it can also be observed that this stage accounts for approximately 75 % of the temperature lift. The optimum intermediate temperatures of these cycles therefore tend towards the side opposite to the vapor injection. This effect is more evident in the case of LT, where cycles like S+V, I+V, or L+V have the optimal intermediate cascade temperature closer to 0 °C. A particularly striking case is the cycle with vapor injection in both HT and LT: here the intermediate cascade temperature lies approximately in the middle of the total temperature lift, at −24 °C.
Mass flow rate
Mass flow rate is another essential parameter to analyze the operation of the cycles. In Fig. 10, a graph is shown with the mass flow rate results of 38 of the 42 cycles that can be calculated, considering that the LT cooling capacity required for all configurations is the same, 10 kW.
The stages that contain a vapor injection tend to have a lower mass flow rate. Conversely, the stages based on a single-stage cycle tend to have a higher mass flow rate. Also, the LT stages have a lower mass flow rate than the HT stages. This is typical of cascade cycles because the LT condenser heat load is higher than that of the LT evaporator, so the heat load of the HT evaporator, and hence its mass flow rate, must be higher as well.
On the other hand, the mass flow rates of the single (non-cascade) cycles are also worth mentioning. In the case of the cycle with vapor injection, the mass flow rate is minimal because it has only one stage, but the cycle is not viable due to the high discharge temperature. The cycle with liquid injection has a very high mass flow rate, which is due to the properties of the refrigerant itself: to reach such low temperatures, it is necessary to use a refrigerant that is not intended for high temperatures. Because of that, R-170 has a critical temperature close to the 30 °C of the condenser, and therefore a reduced latent heat there. When adding a vapor injection or a heat exchanger, the increase in the specific refrigerating effect is considerable, causing the mass flow rate to decrease substantially.
Conclusions
The lack of studies in ultralow-temperature refrigeration means that this field remains to be optimized and analyzed; the sector has not been studied in depth beyond basic two-stage cascade cycles or three-stage cascades. The operational and energy performance of many cycles has been simulated considering the natural refrigerants R-170 and R-290 in the low- and high-temperature stages, respectively. The following conclusions can be summarized.
On the one hand, it is clear that at least a two-stage system is needed to obtain acceptable performance, and that the ideal configuration is a cascade. This is because, by adding more stages, the pressure lift of each compressor decreases; consequently, the COP of the stages increases because of a lower compressor power consumption. If a cascade based only on single-stage cycles is preferred, an IHX is not a viable option. Incorporating the two-stage cycle with liquid injection causes an increase in the COP that is even higher when a vapor injection is incorporated. Also, the two-stage cycle with vapor injection has a remarkable energy performance; still, the lack of a refrigerant capable of working with such a large temperature lift causes very high discharge temperatures and makes it unacceptable on its own.
Other conclusions related to these technologies are that the vapor injection provides higher energy performance in the LT stage than in the HT stage. The parallel compression cycle improves the results of a single-stage cycle, but not enough to be competitive. In addition, using a single compressor for an entire stage, whether LT or HT, is expected to shorten its useful life, as it carries a higher workload than the rest. Another critical aspect is the impossibility of using a parallel compression cycle with an economizer if the temperature lift is very high and requires a refrigerant that cannot work at standard or high temperatures.
To sum up, the cycles with the highest energy performance are those including a two-stage cycle with vapor injection in the LT stage. Combined with a single-stage cycle, a single-stage cycle with IHX, or a two-stage cycle with liquid injection, they offer the highest COP (0.89, and 0.87 for the last combination), 43.5 % higher, with the same refrigerants, than a two-stage cascade cycle based on single-stage cycles (COP of 0.62). Parallel compression cycles offer a COP between 20 % and 30 % worse than those mentioned above. Cycles with a single-stage with IHX offer a COP similar to that of the single-stage cycles, making the IHX unnecessary. Future research can study the influence of other refrigerants (pure and mixtures) on the energy and operational performance of the proposed cycles, particularly the most promising ones. Moreover, the energy performance must be validated through measurements in an experimental setup. A multi-parameter evaluation involving exergy, environmental, and economic analyses (or their combination) could enrich and complement the assessment provided in this paper.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-06-26T15:11:29.320Z | 2022-06-24T00:00:00.000 | {
"year": 2022,
"sha1": "4bea09e73ea9257de99ddb8636ff3b11516c8a70",
"oa_license": "CC0",
"oa_url": "http://repositori.uji.es/xmlui/bitstream/10234/200204/1/Udroiu_2022.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c843d365a2848a01355963886f72fdc0d28517f",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125606168 | pes2o/s2orc | v3-fos-license | APPLICATION OF THE FLUX BENDING EFFECT IN AN ACTIVE FLUX-GUIDE FOR LOW-NOISE PLANAR VECTOR TMR MAGNETIC SENSORS
A concept of a planar vector magnetic sensor comprising in-plane tunnel magnetoresistive (TMR) sensors and an active flux-guide (AFG) is introduced in this work. The AFG redirects the magnetic flux and modulates it at high frequency, providing a vertical detection capability and suppressing the field noise of the TMR in low-frequency measurements. The vertical sensitivity of 19.5 V/T was close to the in-plane sensitivity of 19.2 V/T. In addition, the 1-Hz field noise was suppressed from 6 nT/√Hz down to 0.4 nT/√Hz. The flux bending effect of the AFG was also verified by angular measurements, with the angular deflection found to be about 50°. This revealed that the vertical field component was indeed detected by the in-plane sensor, and that the proposed method is a feasible approach for the development of low-noise planar vector magnetic sensors.
Tunneling magnetoresistance (TMR) sensors are widely used in magnetic sensing applications owing to their high magnetoresistance ratio, low cost, and low power consumption [1,2], e.g. in current measurement, electronic compasses, automation, and geomagnetic applications [3]. To expand the applications of magnetic sensors, high-performance devices need to be developed, including miniaturized, co-planar, and multi-axis vector sensors. In the past decade, many research groups have developed vector magnetometers based on solid-state sensors [4,5], exploiting advantages such as the CMOS compatibility of TMR, which allows the devices to be easily integrated. Apart from the traditional three-axis design, in which three sensors are aligned with their sensing directions along the three vectors of the Cartesian coordinate system [6], the flux-guide technique has been proposed in recent years to construct a vector magnetometer using co-planar sensors [4,5]. A flux-guide helps to induce an in-plane magnetic field component, which can be measured by an in-plane sensor. However, the drawbacks of the flux-guide are the hysteresis of its materials and the fact that it does not improve the noise of the incorporated sensors. The detectivity of TMR sensors is known to be on the order of pico-tesla (pT) [7], with a noise spectral density of less than 1 nT/√Hz at several kilohertz [8]. Unfortunately, the resolution of TMR sensors in low-field measuring applications is severely restricted by the low-frequency (1/f) noise. Many efforts to suppress 1/f noise have been reported, e.g. micro-electro-mechanical-system (MEMS) flux concentrators (FC) [9,10] and modulated flux densities [11]. In the modulation technique, the hysteresis of a magnetic sensor incorporating a soft magnetic FC can be eliminated by modulating the permeability of the FC, which then plays the role of an active flux-guide (AFG) [12]. Our previous work showed that the 1/f noise of a TMR was improved by a factor of 12, with a noise level of about 0.33 nT/√Hz at 1 Hz, using a shielding chopper [13].
In this work, we present a concept for a vector magnetic sensor comprising a square hollow AFG, with a TMR sensor aligned at the position near the outer edge of the AFG where the flux is redirected. In this way, the vertical field component (B z ) is redirected into an in-plane magnetic field component, which is easily caught by a planar sensor. The working principle of the AFG can be interpreted as follows. With only the tubular Metglas core, it functions like a passive flux-guide, which induces in-plane field components; in addition, the flux density near the sensor is concentrated, boosting the sensitivity of the sensor. In this design, the permeability of the core material is oscillated, so that the core is switched between the unsaturated and saturated states. The flux densities around the sensor are therefore modulated at high frequency, which moves the operating point of the TMR sensor to the high-frequency regime, where only the white noise level remains. Thus, the 1/f noise of the TMR sensor is suppressed. The AFG not only contributes to suppressing 1/f noise but is also a promising route towards a vector magnetometer for sensing the geomagnetic field using planar TMR sensors. The sensitivity, angular responses, and noise characteristics of the engineered sensor were measured and verified, and the experimental results are presented.
EXPERIMENTAL
The concept of a planar TMR sensor for vertical detection with an AFG is shown in Fig. 1. The TMR sensor used in this work was the commercial TMR2102 sensor provided by MDT Inc. [14]. Its inner structure consists of four active TMR arrays forming a full Wheatstone bridge, each array being a series circuit of hundreds of magnetic tunnel junctions (MTJs). The sensor's nonlinearity is below 1 % in the range of ±30 Oe, and its intrinsic sensitivity is 4.9 mV/V/Oe within the linear range [14]. The AFG is constructed from a soft magnetic core and an excitation coil. It was designed as a square hollow tube, 14 mm in length with a 7-mm square edge. In our previous design [13], a cylindrical chopper was used; however, the Metglas core can hardly be bent into a perfect circle, which leads to a non-uniform flux density distribution. In the present square-shaped design, each edge of the Metglas is a straight line, which makes it easy to form the shaped tube and gives a more uniform flux distribution. The core material is Metglas-2714A from Metglas Inc. [15], with a thickness of about 0.6 mil (~15 µm); it is easily saturated by an external magnetic field below 0.1 Oe [15]. The single core of the AFG was wrapped with copper wire to modulate the permeability of the Metglas. The excitation coil has about 100 turns and induces a magnetizing field of 0.2 Gauss for the modulation. The wiring configuration of the AFG is shown in Fig. 1: Fig. 1(a) shows the complete isometric view of the AFG design, Fig. 1(b) the sectional top view including the Metglas core and copper wires, and Figs. 1(c) and (d) the sectional side views used in the simulation. To show the bending effect of the AFG simply, a cutting plane through the AFG was chosen for the 2D simulation, so that the two sides of the AFG tube are represented by two side bars. The simulation was carried out with the Maxwell V16.0 simulator to show the bending effect of the AFG [16]. The TMR sensor was placed near the outer edge of the chopper tube, where the in-plane magnetic field component induced by the AFG could be detected, as illustrated in Fig. 1(d). In Figs. 1(c) and (d), only the applied field is shown, not the magnetizing field induced by the excitation coil. When the magnetizing field is ON (current ON), it magnetizes the Metglas core so that the permeability is low and the applied magnetic field remains homogeneous, as shown in Fig. 1(c). When the current is OFF, the core is demagnetized and has an extremely high permeability, so the flux density is concentrated; this is the so-called "bending effect", as shown in Fig. 1(d).
The optimal position of the TMR sensor with respect to a flux-guide has also been reported in our previous work [12,13,17]: the sensor must be placed as close as possible to the outer edge of the chopper tube. The flux bending effect of an AFG can be estimated from the relation between the sensitivity and the excitation current. Fig. 2 shows the bending effect of the AFG with a DC current passing through the excitation coil, with the TMR sensor biased at 3.5 V DC. The effective sensitivity changed from 109 V/T to 38 V/T as the current changed from 0 A to 0.4 A. Because of the trade-off between power consumption and effective sensitivity, and because the sensitivity decreased only slightly between 0.2 A and 0.4 A of DC current, the amplitude of the excitation current was set at 0.2 A with a square waveform. The excitation frequency was set to about 1 kHz, far enough above the 1/f corner. The excitation signal was generated by a 2 MHz oscillator and divided down by a binary divider (CD4020). The output of the sensor was amplified by an instrumentation amplifier (INA129) and demodulated by a lock-in amplifier using an AD630 mixer. An active low-pass filter (LPF) with a cutoff frequency of 10 Hz was used to narrow the bandwidth, further suppressing the noise and retrieving the dc signal that is proportional to the measured magnetic field.
The sensitivity of the sensor was determined from the slope between the reference magnetic field and the response output of the TMR sensor. The reference magnetic field was generated by a Helmholtz coil with a sweeping frequency of 0.5 Hz and an amplitude of ±60 µT; the sweeping signal was generated by a multifunction synthesizer (HP-8904A). The applied field orientations were installed in three cases: U x (B x ), U x (B y ), and U x (B z ). The sweeping fields and the demodulated output were recorded by a data acquisition device (NI-MyDAQ) from National Instruments. To analyze the noise spectral densities, the sensor was enclosed in a tri-layer magnetic shield to establish a zero field around the TMR sensor and prevent interference from magnetic fields induced by electrical equipment. The shielding chamber was made of Mu-metal with an extremely high permeability (> 100000) and a thickness of about 1 mm; it was 500 mm high with a diameter of 200 mm, and the spacing between layers was about 20 mm. The TMR sensor with the AFG was placed at the center of the shielding chamber, and the three caps of the chamber were also made of Mu-metal. A spectrum analyzer, HP-3582A, was used to record the noise signal. The responses of the sensor to each external magnetic field were measured as the peak-to-peak output while the TMR sensor was manually rotated in the tri-axis sweeping fields B x , B y , and B z of the three-dimensional Helmholtz coils, whose strength was about ±60 µT. Table 1 shows the sensitivities of the sensor with and without the AFG. Without the AFG, with the applied field parallel to the normal sensing direction of the sensor, U x (B x ), the obtained sensitivity was 165 V/T, whereas with the external field applied along the z-axis, U x (B z ), the obtained sensitivity was only about 1.3 V/T. This indicates that the TMR sensor is nearly insensitive to the vertical magnetic field; the small residual vertical sensitivity may be caused by cross-detection error. With the AFG, the in-plane sensitivity of the sensor was 19.2 V/T, which results from the modulation efficiency. When the sensor incorporated the AFG and the exposed magnetic field was bent from out-of-plane to the normal sensing direction of the TMR, U x (B z ), the sensitivity was about 19.5 V/T, with a maximum of 21 V/T at 50° from the normal sensing direction of the TMR, measured between the x-axis and the z-axis. These results reveal the benefit of the AFG in redirecting the flux lines from the vertical component to a horizontal component, which can easily be caught by a planar TMR sensor.
Outputs of the TMR sensor in response to the applied magnetic fields
The responses of the TMR sensor, with and without the AFG, to the B x , B y , and B z components were measured. The system was placed on an accurate rotation stage to verify the angular response to the three sweeping magnetic fields, as shown in Figs. 3(a, b) and 4(a, b). The sensor was rotated manually through a complete 360° in steps of 10°. First, the TMR sensor was rotated about the square hollow axis of the AFG, i.e. about the z-axis (B z ); it was then rotated about the y-axis (B y ) to verify the bending effect of the AFG. The experiments were set up both without and with the AFG for comparison. Figure 3 shows the setup and the output of the TMR sensor rotating about the axis of the square hollow AFG in the reference magnetic fields, without (Fig. 3a) and with the AFG (Fig. 3b). The output of the TMR sensor without the AFG was about ±2 V, smaller than the ±6 V obtained with the AFG. This difference arises because the focus was on the bending of the flux: a small gain of 50 was kept in the bare-TMR case, while the gain for the TMR incorporated with the AFG was increased to about 2350 to show the measured response clearly. The angular responses (U x ) to B x and B y differed by 90°, revealing that the reference magnetic fields were essentially orthogonal. Additionally, the output U x (B z ) of the bare TMR sensor (without AFG) was only 0.1 V, i.e. the sensor was almost insensitive to B z . With the AFG, in contrast, the output of the sensor was constant at 7 V in the vertical magnetic field (B z ). This confirms again that the vertical field component can be caught by a planar sensor owing to the bending effect of the AFG.
Rotation system about the y-axis
The angular bending effect can be deduced from the results of rotating the system about the y-axis. Interestingly, the responses of the TMR sensor with and without the AFG were shifted by an angle of 50°, indicating that the flux lines were bent by this angle, formed between B z and the normal sensing direction of the TMR. This is consistent with the output of the TMR reaching a maximum of 8 V at 50° (Fig. 4b), whereas without the AFG the output reached a maximum of 2 V at 90° (Fig. 4a). In Fig. 4b, the output of the TMR at 50° is higher than that of the TMR incorporated with the AFG when rotating the sensor about the z-axis (Fig. 3b), which can be explained by the higher sensitivity of the TMR sensor at 50°, as mentioned in Table 1. The response of the TMR sensor to the B y field component was almost constant because this field was always perpendicular to the sensing axis of the TMR, while the other field components, B x and B z , produced sinusoidal responses between ±8 V. The sinusoidal responses of the TMR incorporated with the AFG reveal that the flux density around the sensor was switched between the saturated and unsaturated states of the Metglas core. The bending effect is active in the unsaturated state of the core, and the magnetizing field induced by the excitation coil is sufficient to magnetize and demagnetize the core. In the unsaturated state the demagnetization of the core is dominant, so the response of the sensor incorporated with the AFG depends only on the aspect ratio (the aspect ratio is 2 in this work, which is small enough) [17]. The AFG then acts as a flux guide, and the flux density therefore responds as in the case without the AFG over a complete rotation of 360°.
Impact of the AFG on the 1/f noise
The spectral densities of the field noise of the TMR sensor with and without the AFG are shown in Fig. 5. The intrinsic noise of the sensor is presented to evaluate the performance in reducing 1/f noise through the reduction ratio, which is determined by dividing the TMR's noise without the AFG by that with the AFG. The intrinsic noise of the bare TMR sensor was about 6 nT/√Hz at 1 Hz, with a small slope from 0.1 Hz to 10 Hz; according to the datasheet of the TMR, the 1-Hz noise could be even higher (10 nT/√Hz). With the AFG, the noise spectrum was nearly flat within the recorded frequency span and the 1/f knee was shifted to below 0.1 Hz. The minimum noise level of the TMR sensor reached 0.4 nT/√Hz at 1 Hz. The noise reduction can be attributed to the phase-sensitive detection (PSD) technique used in this work. With the PSD technique, the measured signal (the applied magnetic field) is extracted from the modulated signal, and only signals with frequencies close to the excitation frequency (1 kHz in this work) pass through the PSD system. Furthermore, the remaining components close to, but not at, the excitation frequency are further filtered out by the LPF. Therefore, the noise is effectively suppressed by the chopping technique using an AFG.
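The principle can be illustrated with a short numerical sketch of such a lock-in (chopper) scheme; the noise model, amplitudes, and filter order below are illustrative assumptions, not the parameters of the actual instrument.

```python
# Sketch of phase-sensitive detection: the field is chopped at the 1 kHz excitation
# frequency, moved away from the 1/f region, demodulated by multiplication with the
# reference, and recovered with a 10 Hz low-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt, square

fs, f_exc = 100_000, 1_000                 # sample rate and excitation frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)
field = 2e-6 * np.ones_like(t)             # slowly varying field to be measured (T)

reference = square(2 * np.pi * f_exc * t)                     # chopper / excitation reference
drift = 5e-7 * np.cumsum(np.random.randn(t.size)) / np.sqrt(fs)  # crude low-frequency noise
sensor_out = field * reference + drift                         # modulated signal plus drift

b, a = butter(2, 10, btype="low", fs=fs)                       # 10 Hz low-pass, as in the setup
recovered = filtfilt(b, a, sensor_out * reference)             # demodulate, then filter
print(f"recovered field ~ {recovered[fs // 2:].mean():.2e} T (true value 2e-06 T)")
```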
CONCLUSIONS
We have shown the development and experimental validation of a low-noise TMR sensor incorporated with an AFG as a concept for a planar vector magnetic sensor. Both the flux bending effect and the noise-reduction performance of the AFG were demonstrated. On the one hand, the AFG deflects the flux lines from out-of-plane to in-plane, inducing horizontal field components that can easily be sensed by planar sensors; on the other hand, the AFG enhances the sensitivity of the incorporated sensor. Importantly, with the modulation of the flux density, the working point of the TMR is moved to the high-frequency regime, where there is no 1/f field noise. A field noise of 0.4 nT/√Hz at 1 Hz was observed. The proposed concept of the vector magnetometer system can be used to develop a low-noise three-dimensional magnetic field sensor using co-planar TMR sensors. Because the bending angle between the vertical and horizontal axes is about 50° and the misalignment of the sensors is unavoidable, the three axes will certainly not be orthogonal, so a calibration process is needed.
"year": 2018,
"sha1": "be2cd4eb02d243ff58f167d14248d53e2424e79a",
"oa_license": null,
"oa_url": "http://vjs.ac.vn/index.php/jst/article/download/12652/103810382724",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ee52b58f772fefd24f710661f8c04c2ec132180f",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
119431814 | pes2o/s2orc | v3-fos-license | Temperatures of Exploding Nuclei
Breakup temperatures in central collisions of 197Au + 197Au at bombarding energies E/A = 50 to 200 MeV were determined with two methods. Isotope temperatures, deduced from double ratios of hydrogen, helium, and lithium isotopic yields, increase monotonically with bombarding energy from 5 MeV to 12 MeV, in qualitative agreement with a scenario of chemical freeze-out after adiabatic expansion. Excited-state temperatures, derived from yield ratios of states in 4He, 5Li, 6Li, and 8Be, are about 5 MeV, independent of the projectile energy, and seem to reflect the internal temperature of fragments at their final separation from the system. PACS numbers: 25.70.Mn, 25.70.Pq, 25.75.-q
Recently, a caloric curve of nuclei has been obtained by correlating the values of temperature and excitation energy measured for spectator fragmentation in reactions of 197 Au + 197 Au at 600 MeV per nucleon [1]. The temperatures were derived from double ratios of helium and lithium isotopic yields while the excitation energies were obtained by adding up the kinetic energies of the product nuclei and the mass excess of the observed partition with respect to the ground state of the reconstructed spectator nucleus. The double-bended shape of the caloric curve and its similarity to predictions of microscopic statistical models [2][3][4], has stimulated considerable experimental and theoretical activities. In particular, the second rise of the temperature to values exceeding 10 MeV at high excitation energies has initiated the discussion of whether nuclear temperatures of this magnitude can be measured reliably (see Refs. [4][5][6] and references given in these recent papers) and whether this observation may indeed be linked to a transition towards the vapor phase [7,8]. Obviously, a well-founded understanding of the significance of the employed temperature observables [9] is indispensable when searching for signals of the predicted liquid-gas phase transition in nuclear matter.
Here, we present the results of temperature measurements for central collisions of 197 Au + 197 Au at incident energies E/A = 50 MeV to 200 MeV. These collisions are characterized by a collective radial flow of light particles and fragments which, over the covered energy range, evolves as a dynamical phenomenon closely connected to the initial stages of the reaction [10]. Global equilibrium is clearly not achieved. If local equilibrium is reached, the associated temperatures should reflect the adiabatic cooling of the rapidly expanding system.
Two temperature observables were used simultaneously. Isotope temperatures were deduced from double ratios of isotopic yields [11] and excited-state temperatures were derived from the correlated yields of light-particle coincidences [9,12,13]. It will become evident from the diverging results that this represents more than a methodical test and that the two types of thermometers are sensitive to different stages of the fragment formation and emission.
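For orientation, the two thermometers can be written compactly as below. The isotope temperature follows the standard Albergo-type double-ratio formula, quoted here with the constants commonly used for this helium-lithium combination (B ≈ 13.33 MeV, a ≈ 2.18); the excited-state temperature follows from a Boltzmann population ratio of two states of the same nucleus. The yield and spin values in the example are purely illustrative, not the measured ones.

```python
# Compact sketch of the two thermometers discussed in the text.
import math

def t_heli(y3he, y4he, y6li, y7li, b_mev=13.33, a=2.18):
    """Isotope temperature (MeV) from the helium-lithium double yield ratio (Albergo-type)."""
    r = (y6li / y7li) / (y3he / y4he)
    return b_mev / math.log(a * r)

def t_excited(y_ground, y_excited, g_ground, g_excited, de_mev):
    """Temperature (MeV) from the population ratio of an excited state and the ground state."""
    return de_mev / math.log((g_excited * y_ground) / (g_ground * y_excited))

# Illustrative yields only:
print(f"T_HeLi ~ {t_heli(1.0, 6.0, 1.1, 1.0):.1f} MeV")
# e.g. 5Li: ground state vs. the 16.66 MeV state, assuming equal statistical weights 2J+1 = 4:
print(f"T_Li5  ~ {t_excited(1.0, 0.036, 4, 4, 16.66):.1f} MeV")
```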
Beams of 197 Au with E/A = 50, 100, 150, and 200 MeV, provided by the heavy-ion synchrotron SIS, were directed onto targets of 75-mg/cm 2 areal density. Two multi-detector hodoscopes, consisting of 96 and of 64 Si-CsI(Tl) telescopes in closely-packed geometries, were placed on opposite sides with respect to the beam axis. Four high-resolution telescopes [4] were used to measure the isotopically resolved yields of light charged particles and fragments. The choice of angles, between θ lab = 24 • and 58 • for the hodoscopes and θ lab ≈ 40 • for the telescopes, was motivated by the aim of a good coverage at mid-rapidity.
Additional detectors were employed in order to probe the charged-particle multiplicity for impact parameter selection. The angular range of θ lab = 6 • to 20 • was covered by an azimuthally symmetric array of 36 CaF 2 -plastic phoswich detectors [14]. Within the angular range 30 • to 55 • , a solid angle of 0.7 sr was covered by an array of 48 elements of Si-strip detectors. The results presented in the following were obtained after selecting an event class of highest associated multiplicity, corresponding to about 10% of the total reaction cross section.
The populations of particle-unstable resonances were derived from two-particle coincidences measured with the Si-CsI hodoscopes. The peak structures were identified by using the technique of correlation functions, and background corrections were based on results obtained for resonance-free pairs of fragments with Z ≤ 3, such as p-d, d-d, up to 3 He-7 Li. Examples of correlation functions constructed for p-4 He and d- 3 He coincidences are shown in Fig. 1. They are dominated by the resonances corresponding to the ground state (g.s.) and 16.66-MeV excited state of 5 Li. This pair of states represents a widely used thermometer for nuclear reactions [12,13,15]. The observed weak peak intensities are expected for large source sizes. Correlated yields of p-t, d-4 He, 4 He-4 He, and p-7 Li coincidences and 4 He singles yields were also measured and used to deduce temperatures from the populations of states in 4 He (g.s.; group of three states at 20.21 MeV and higher), 6 Li (2.19 MeV; group of two states at 4.31 and 5.65 MeV), and 8 Be (g.s.; 3.04 MeV; group of five states at 17.64 MeV and higher). The probabilities for the coincident detection of the decay products of these resonances were calculated with a Monte-Carlo model [13,16]. The uncertainty of the background subtraction is the main contribution to the errors of the deduced temperatures.
The obtained values for two isotope and three excited-state temperatures are given in Fig. 2. The isotope temperatures T HeLi and T Hedt were derived as described in [4], and the correction factors given there have been applied in order to account for the effects of sequential feeding. The three excited-state temperatures are characterized by large energy differences of the considered states (for T Be8 the 18 MeV/ 3.04 MeV result is shown), and no corrections for sequential feeding were applied (fully justified only for 5 Li, see below and [12,15]). At E/A = 50 MeV, all temperature values coincide within the interval T = 4 to 6 MeV, an observation made also at E/A = 35 MeV by Huang et al. [17]. With increasing bombarding energy, however, the isotope temperatures rise approximately linearly up to T HeLi ≈ 12 MeV and T Hedt ≈ 9 MeV at E/A = 200 MeV. The excited-state temperatures, on the other hand, mutually consistent with each other, appear to be virtually independent of the bombarding energy. Their mean values, over the covered range of bombarding energies, are 4.6 ± 0.6 MeV, 5.1 ± 0.3 MeV, and 6.1 ± 0.7 MeV for T Li5 , T He4 , and T Be8 , respectively. These differences may be significant, and could even be enhanced by sequential-decay corrections (see below), but they seem marginal in comparison to the apparent qualitative difference between the isotope and excited-state temperatures.
The momentum-space acceptance of the detectors, kept at fixed positions in the laboratory, changes with bombarding energy in the center-of-mass frame. For the case of E/A = 150 MeV, the acceptance of the 96-element hodoscope for p-α coincidences in the momentum interval corresponding to the ⁵Li g.s. resonance is shown in Fig. 3 (top). It covers the region around θ_cm = 90° and, in addition, extends to forward and backward angles with a varying transverse-momentum acceptance. The wide acceptance and its shift with bombarding energy should not be crucial, however, because no significant variation of T_Li5 within the covered momentum space was found (Fig. 3, bottom).
It is not immediately obvious that the divergence of the isotope and excited-state temperatures, growing dramatically with bombarding energy, contradicts the concept of a common fragment freeze-out at a single temperature. Xi et al. report that their recent statistical calculations indicate a strongly reduced sensitivity of the helium-lithium thermometer at high temperature, such that it may prevent reliable temperature measurements at T > 7 MeV [5]. Accordingly, a consistent common temperature, if existing, should be low. The excluded-volume effect, as incorporated in the quantum-statistical model by Gulminelli and Durand [6], causes a suppression of particle-unstable resonances decaying into loosely bound products, such as the 16.66-MeV excited state of ⁵Li. It will have the effect that the apparent T_Li5 is low while a common emission temperature may be high (cf. [18]). These calculations demonstrate that large effects can be caused by sequential decay and by structural differences of the nuclear states employed in the temperature measurements, even though they may not suffice to give a consistent explanation of all the present observations.
The dynamical evolution of the fragment formation has very recently been investigated with transport models [19], including nuclear molecular-dynamics [20,21] and quantum molecular-dynamics [22,23] models, applied to the present and similar reactions. These studies suggest that the asymptotic fragments can be identified at an early stage of the reaction, typically at ≈ 40 fm/c. These times coincide with the development of the collective flow component of the fragment motion [19][20][21]. If local chemical equilibrium has been reached, the isotopic composition should reflect the temperature of the system at that particular time.
According to various flow analyses, between 40% and 60% of the collision energy is converted into collective flow energy [10]. In the simplest approximation, the breakup temperature is then estimated as T = (E/A)/12. This assumes complete stopping of the incident nuclei and a classical gas with 3·2A degrees of freedom carrying a thermal energy component of 50% of the collision energy. This relation (dashed line in Fig. 2) does not describe the data very well, but it illustrates the expected linear rise and shows that the measured isotope temperatures have about the right order of magnitude. Better agreement with the data at the lower energies is obtained if, for the same thermal energies, the experimental temperature vs. energy relation of Ref. [1] is used (Fig. 2, full line). Even though it remains to be understood why T_Hedt is considerably lower in the present case (cf. [4,24,25]), the comparison suggests that the isotope thermometers are sensitive to the local temperature at freeze-out in a blast scenario [26][27][28].
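A minimal reconstruction of the arithmetic behind this estimate, assuming nonrelativistic kinematics for a symmetric system of 2A nucleons and the 50% thermal share stated above:

```latex
E_{\mathrm{cm}} \simeq \tfrac{1}{2}\,A\,(E/A)
\qquad\text{(complete stopping, nonrelativistic)}\\
E_{\mathrm{th}} = \tfrac{1}{2}\,E_{\mathrm{cm}} = \tfrac{1}{4}\,A\,(E/A)
\qquad\text{(50\% assumed thermal, the rest collective flow)}\\
E_{\mathrm{th}} = \tfrac{1}{2}\,(3\cdot 2A)\,T = 3A\,T
\qquad\text{(classical gas, $T/2$ per degree of freedom)}\\
\Rightarrow\quad T = \frac{E/A}{12}.
```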
The excited states used for the temperature evaluation are very specific quantum states with widths of 1 MeV or less. They are unlikely to exist in the nuclear medium in identical forms [19,29,30]. The observed asymptotic states can develop or survive only at very low densities that may not be reached before the cluster is emitted into vacuum. Accordingly, the excited-state populations should reflect the temperature and its fluctuations at this final stage of fragment emission. The molecular dynamics calculations show that a cluster continues to interact with the surrounding cooling and expanding matter for a considerable time after it has been formed [20]. This will lower its internal excitation but, apparently, does not change the isotopic composition as much.
Excited-state populations have thermal characteristics [31] and have been shown to correspond to expected temperatures in compound reactions [15,32]. In the present case, the observed internal fragment excitations, associated with the final breakup of the system, are found to be consistent with a thermal population at T = 5 to 6 MeV (Fig. 4). The apparent temperatures T_Li6 and T_Be8-1 (3.04 MeV/g.s.), derived from states not widely separated in energy, are lower but in accordance with the side-feeding effects predicted by the quantum-statistical model [33].
With the internal excitations corresponding to lower temperatures, the side-feeding corrections for the isotope temperatures will be rather complex. While the role of highly excited continuum states may be reduced, the corrections to be expected may still be large. To give an example, for T_HeLi, the quantum-statistical model predicts a modification of the isotopic double ratio by a factor of 1.5 at T = 5 MeV. It corresponds to a 20% modification of T at this temperature, but to larger relative corrections at higher temperatures (e.g., 30% near T = 10 MeV). Such corrections will have the effect of further increasing the slope of the isotope temperatures vs. bombarding energy and of improving the agreement with the simple expectations (Fig. 2).
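To illustrate how such a correction propagates, the following sketch assumes an Albergo-type form T = ΔB/ln(aR) with ΔB ≈ 13.3 MeV for the helium-lithium thermometer (an assumption, see above); dividing the measured double ratio by the quoted factor of 1.5 then shifts T = 5 MeV upward by roughly 20%, and the same logarithmic shift matters more at higher temperatures.

```python
import math

DELTA_B = 13.3  # MeV; assumed He-Li binding-energy difference (Albergo-type value)

def corrected_temperature(T_apparent, ratio_factor):
    """Temperature after multiplying the isotopic double ratio R by ratio_factor,
    assuming T = DELTA_B / ln(a * R): 1/T' = 1/T + ln(ratio_factor) / DELTA_B."""
    return 1.0 / (1.0 / T_apparent + math.log(ratio_factor) / DELTA_B)

# Side feeding is assumed to inflate R by 1.5, so the correction divides it out.
T = 5.0
T_corr = corrected_temperature(T, 1.0 / 1.5)
print(f"apparent T = {T:.1f} MeV -> corrected T = {T_corr:.1f} MeV "
      f"({100 * (T_corr / T - 1):+.0f}%)")
```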
The presented interpretation of the observed qualitative difference between the isotope and excited-state temperatures seems rather attractive. It implies that isotope yields may be used to probe the early stages of the fragment formation process, and it may explain the saturation of the excited-state temperatures that characterizes a wide variety of measurements at intermediate and relativistic energies [13,15]. This interpretation, therefore, should be confirmed by further work which may aim at a quantitative interpretation of the internal fragment excitation but also address current open questions such as the role of initial correlations (see, e.g., [13,20,23,34]) and of quantum effects in the fragment formation process [35][36][37][38]. | 2019-04-14T03:07:25.998Z | 1998-01-22T00:00:00.000 |
"year": 1998,
"sha1": "51d83b262386038c7487b0eb29aafadd2ece5fda",
"oa_license": null,
"oa_url": "https://discovery.ucl.ac.uk/1374658/1/1374658.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "51d83b262386038c7487b0eb29aafadd2ece5fda",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
210996909 | pes2o/s2orc | v3-fos-license | Three new species of Meleonoma Meyrick from Yunnan, China (Lepidoptera, Gelechioidea, Xyloryctidae)
Abstract Three new species of Meleonoma Meyrick, 1914 (Gelechioidea, Xyloryctidae) from China, Yunnan Province, Meleonoma plicata sp. nov., M. scalprata sp. nov. and M. taeniata sp. nov., are described and illustrated. A key to Meleonoma species known from China is provided.
Introduction
This paper is a continuation of our taxonomic studies of the Chinese Meleonoma Meyrick, 1914. Our most recent contribution to this subject was the description of two new species from China together with a review of the genus's taxonomic history (Yin and Cai 2019); the examination of newly received specimens collected from Yunnan Province has since yielded another three new species, Meleonoma plicata sp. nov., M. scalprata sp. nov. and M. taeniata sp. nov., which are described and illustrated here. Adding the three Japanese species recently published by Kitajima and Sakamaki (2019), the total number of Meleonoma species has thus increased to 43, with 19 occurring in China. A checklist of the Chinese Meleonoma prior to this study can be found in Yin and Cai (2019).
Little is known about their biology. The larvae of some species, such as M. tamraensis Park, 2016 and M. flavilineata Kitajima & Sakamaki, 2019, are case-bearers. The cases are usually semi-oblong or cylindrical, and built from dead broad leaves and fragments of moss and stems of Poaceae (Kitajima and Sakamaki 2019).
As summarized in Yin and Cai (2019), a well-defined taxonomic position of the genus Meleonoma has not been proposed yet. In the absence of recent phylogenetic research, we continue to follow Kim et al. (2016), tentatively placing Meleonoma in the Xyloryctidae.
Material and methods
The examined specimens were collected from Yunnan Province in southwestern China in 2018. The descriptive terminology of the anatomical structures generally follows Wang (2006a) and Kristensen (2003). Photographs of adults were taken using a Canon EOS 6D Mark II camera plus an EF 100 mm f/2.8L MACRO IS USM lens with the help of EOS Utility 3.10.20 software. Images of genitalia were captured using a Leica DM4 B upright microscope and photomontage was performed with Leica Application Suite X imaging software. All type specimens are deposited in the Morphological Laboratory, Guizhou University of Traditional Chinese Medicine, Guiyang 550025, Guizhou, China.
Taxonomy
Genus Meleonoma Meyrick, 1914

Meleonoma Meyrick, 1914: 255. Type species: Cryptolechia stomota Meyrick, 1910a, by original designation. = Acryptolechia Lvovsky, 2010: 378. Type species: Cryptolechia malacobyrsa Meyrick, 1921. Synonymised by Lvovsky (2015).

Diagnosis. See Yin and Cai (2019: 80).

Meleonoma plicata sp. nov.

Diagnosis. This new species, M. foliiformis, M. malacobyrsa and M. scalprata share many characters in both appearance and male genitalia, which implies that these four species might belong to the same lineage within Meleonoma. In appearance, they all have a relatively large wingspan; similar composition of body color (primarily yellow and brown); similar pattern of coloration of forewings (mostly yellow, more or less mixed with brown near base, a brown fascia from costa obliquely to slightly before tornus, a somewhat triangular brown patch at apex). In male genitalia, they all share the following characters: valva large and broad with median surface densely covered with long hairs; sacculus broad, nearly triangular or trapezoid with various sclerotized processes. However, M. plicata can be easily distinguished from the others by the following character combination: forewing with basal 1/3 densely mixed with blackish brown speckles; dorsal margin of valva with a small finger-shaped process at distal 1/6, ventral margin smooth, without any process; dorsal margin of sacculus with a large fingerlike process at end; phallus with an oblique portion in distal 1/3 heavily wrinkled and covered with numerous tiny spines.
Key to Meleonoma species from China
Description. Head: vertex and front pale gray, mixed with yellow bilaterally; labial palpus long and recurved, extending well beyond vertex, with smooth scales, yellow, segment 1 mixed with dark brown on outer surface, segment 2 blackish brown distally and extending to middle of ventral margin; segment 3 about 3/4 length of segment 2; antenna with scape blackish brown on dorsal surface and yellow on ventral surface, with flagellum ringed, alternately blackish brown and yellow, except almost pure pale yellow on ventral surface of basal half flagellomeres; scales of proboscis yellow.
Thorax: tegula and mesonotum blackish brown mixed with yellow; legs whitish yellow, tibiae and tarsi scattered with blackish brown speckles on outside. Forewing (Fig. 1): length 5.8 mm (N = 1), about 3.6 × as long as wide, yellow, basal 1/3 quite densely mixed with blackish brown speckles; a blackish brown fascia extending from basal 3/5 of costa obliquely to slightly before tornus, with inner margin slightly arched outward, outer margin somewhat serrated irregularly; cell with two dim black dots, one set at middle, other at middle of fold; apex forming a somewhat triangular patch, blackish brown, mixed with yellow along apex and termen; other yellow parts sparsely scattered with blackish brown scales; cilia yellow except blackish brown on tornus; ventral surface yellowish brown. Hindwing ( Fig. 1): translucent grayish brown, gradually darkening towards apex; cilia grayish.
Male genitalia (Fig. 4): uncus with base short, slightly dilated bilaterally, with other part quite long and slender, slightly curved, apex acute; gnathos mostly membranous, with lateral arms arched outward; tegumen near bell-shape, lateral arms about same width, posterior margin slightly concave at middle, anterior margin shallowly concave into parenthesis-shape; valva gradually widening to middle from a narrow base, with ventral margin broadly arcuate in distal half into rounded apex, median surface densely covered with long hairs; costa nearly straight, strongly sclerotized except only weakly so in distal 1/6, with a small finger-shaped process protruded outward at distal 1/6; transtilla short and weakly sclerotized, covered with rows of long hairs, protruded forward medially; sacculus broad, nearly triangular, with basal 2/3 of dorsal margin joined with valva, a large fingerlike process at end of dorsal margin, distal half of ventral margin slightly serrated, somewhat protruded, densely covered with long hairs and as well on central area of sacculus; saccus funnel-shaped narrowly rounded at apex; juxta arcuate; phallus moderately sclerotized, cigar-shaped, with an oblique portion in distal 1/3 heavily wrinkled and covered with numerous tiny spines.
Biology.
Nothing is known about the larva. The adult was collected at night in May. Distribution. Known only from the type locality (Southwest China: Yunnan Province).
Etymology. The specific name is derived from the Latin adjective plicatus (wrinkled, folded), referring to the heavily wrinkled distal part of the phallus in male genitalia.

Meleonoma scalprata

Diagnosis. This new species belongs to the lineage comprising M. foliiformis, M. malacobyrsa and M. plicata. The new species can be easily distinguished from the others by the following combination of characters: forewing with basal half mixed with blackish brown speckles; both dorsal and ventral margins of valva smooth, without any process; dorsal margin of sacculus with an inconspicuous beak-shaped process at end; phallus with one rodlike sclerite originating from middle and extending to apex.
Description. Head: vertex pale gray, mixed with yellow bilaterally, front pale yellowish gray; labial palpus long and recurved, extending well beyond vertex, with smooth scales, yellow, segment 1 mixed with blackish brown on outer surface, segment 2 blackish brown distally and extending vaguely to middle of ventral margin; segment 3 about half length of segment 2; antenna with scape blackish brown on dorsal surface and yellow on ventral surface, with flagellum ringed, alternately blackish brown and yellow, except almost pure yellow on ventral surface of about basal half flagellomeres; scales of proboscis pale yellow.
Thorax: tegula yellow, very sparsely mixed with blackish brown laterally; mesonotum yellow, mixed with blackish brown and more strongly so on posterior half; legs whitish yellow, tibiae and tarsi scattered with blackish brown speckles on outside. Fore-
Male genitalia (Fig. 5): uncus with base short, dilated bilaterally into inverted T-shape, with other part quite long and slender, rodlike, apex acute; gnathos mostly membranous, with lateral arms slightly curved, a bit more sclerotized in basal half than distal half; tegumen nearly inverted V-shaped, lateral arms gradually narrowed to apex, posterior margin slightly concave at middle, anterior margin relatively shallowly concave into parenthesis-shape; valva somewhat in shape of table knife, gradually widening to basal 1/4 from a narrow base, with ventral margin arcuate in distal 1/5 into rounded apex, median surface densely covered with long hairs; costa strongly sclerotized, nearly straight, scattered with long hairs; transtilla covered with rows of long hairs, protruded forward medially, distal portion rounded; sacculus broad, nearly triangular, with basal half of dorsal margin joined with valva, dorsal margin with an inconspicuous beak-shaped process at end, ventral margin slightly more sclerotized, with a very shallow arcuate emargination from about middle to distal 1/4, with long hairs covering median portion as well as central area of sacculus; saccus funnel-shaped, narrowly rounded at apex; juxta widely U-shaped; phallus moderately sclerotized, nearly cylindrical in shape, narrower in basal 1/3, with one rodlike sclerite originating from middle and extending to apex.
Biology. Nothing is known about the larva. The adults were collected at night in May. Distribution. Known only from the type locality (Southwest China: Yunnan Province).
Etymology. The specific name is derived from the Latin adjective scalpratus (knife-shaped), referring to the machete-shaped signum in female genitalia.

Meleonoma taeniata sp. nov.

Diagnosis. This new species is similar to M. torophanes superficially, but it can be distinguished from the latter by having one large earthy yellow V-shaped mark on forewing; uncus triangular; ventral margin of valva smooth, without any spine; sacculus with distal 1/4 sclerotized forming a thickened plate, without any extra process; phallus forming an 8-shaped bandlike structure distally. M. torophanes, in contrast, has two large light-yellow marks on forewing; uncus lanciform; valva with a short spine ventroapically; sacculus forming a narrow process distally; phallus with a hairbrush-shaped sclerite attached with short spines.
Description. Head: vertex and front pale earthy yellow mixed with dark brown; labial palpus long and recurved, extending well beyond vertex, with smooth scales, pale earthy yellow, outer surface of segment 1 dark brown, of segment 2 dark brown distally and extending vaguely to distal 2/3, of segment 3 slightly tinged with dark brown at middle, inner surface of segment 2 dark brown distally; segment 3 slightly shorter than segment 2; antenna with scape dark brown on dorsal surface and pale earthy yellow on ventral surface, with flagellum alternately dark brown and yellow on dorsal surface, except middle 1/3 flagellomeres almost pure dark brown, ventral surface pale earthy yellow; scales of proboscis pale earthy yellow.
Thorax: tegula and mesonotum dark brown mixed with pale earthy yellow; legs pale earthy yellow, forelegs somewhat segmented with wide dark brown rings, mid and hindlegs with tibiae and tarsi scattered with dark brown speckles on outside. Forewing (Fig. 2): length 5.5 mm (N = 1), about 3.6 × as long as wide, dark brown; cell with three indistinct black dots, one set at middle, one at end and one at middle of fold; a broad somewhat V-shaped mark with two ends extending from about basal 1/2 and 4/5 of costa respectively, and converging slightly before tornus, earthy yellow in color, and sparsely tinged with yellowish brown and dark brown scales; apex and termen narrowly edged with pale earthy yellow; cilia earthy yellow mixed with pale brown; ventral surface light brown. Hindwing (Fig. 2): translucent light grayish brown, gradually darkening towards apex; cilia light grayish brown.
Male genitalia (Fig. 7): uncus membranous, triangular in shape, with long setae on dorsal surface; gnathos absent; tegumen inverted U-shaped, lateral arms long, about same width, posterior margin with a shallow V-shaped notch at middle, anterior margin deeply concave; valva broad, gradually widening to basal 2/5 from a relatively narrow base, with distal 3/5 nearly same width, apex broadly rounded, densely covered with long hairs on median surface, but asetose in an elongate membranous area at center; costa moderately sclerotized, broadly arched forming a shallow notch; transtilla round, narrow at base, swollen distally, asetose, protruded forward medially; sacculus broad, trapezoid, with dorsal margin joined with valva at base, with distal 1/4 strongly sclerotized forming a distinct thickened plate that sparsely covered with long setae on both outer and median surfaces, dorsal and ventral margins nearly parallel; saccus short, funnel-shaped narrowly rounded at apex; juxta bifurcated, weakly joined at base; phallus moderately sclerotized, rodlike, narrow at base, gently thickened to basal 2/3, heavily sclerotized in distal 1/3, forming a bandlike structure similar to Arabic numeral "8" in shape.
Biology.
Nothing is known about the larva. The adult was collected at night in May. Distribution. Known only from the type locality (Southwest China: Yunnan Province).
Etymology. The specific name is derived from the Latin adjective taeniatus (bandlike), referring to the bandlike structure of the phallus in male genitalia. | 2020-01-23T09:07:57.347Z | 2020-01-16T00:00:00.000 | {
"year": 2020,
"sha1": "4274600e0b7fad78203072e1821ecbc4d5e8685f",
"oa_license": "CCBY",
"oa_url": "https://zookeys.pensoft.net/article/47189/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0cccbf9c9c7d1049c2036cb5910c61bd6b48f82e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
257371551 | pes2o/s2orc | v3-fos-license | Regulatory Mechanisms in Biosystems
Introduction
There is a growing world-wide demand for natural and healthy food products. As a product of nectar or honeydew processed by honey bees, honey is considered to be both a good food product and one that possesses health benefits. Nowadays, interest in honey is primarily associated with the search for natural ways to strengthen human immunity, which is especially important during the COVID-19 pandemic (Al-Hatamleh et al., 2020), and with the promotion of honey as a sugar substitute (Iqbal et al., 2020). The market of beekeeping products is becoming more globalized and, as a result, competition for foreign markets is intensifying. Ukraine is one of the top five global producers and exporters of honey (Dankevych et al., 2018; Fedoriak et al., 2019). To maintain a high level of competitiveness, it is necessary to ensure that the quality of honey products meets both international and national standards. The chemical composition of polyfloral honey depends on many factors, primarily driven by regional climatic conditions, plant species and the corresponding pigments in the nectar (carotene, xanthophyll, phenolics, etc.), the honey harvest season, environmental factors and treatment methods applied by beekeepers (Halouzka et al., 2016; Pavlova et al., 2018). In order to examine a large range of variation in parameter values for the multiple factors affecting honey quality, Chernivtsi region in SW Ukraine was chosen as a case study. This region represents a steep gradient between mountainous remote areas with traditional village livelihoods, and lowlands with intensive farming, mainly rapeseed (Brassica napus), soybean (Glycine max), sunflower (Helianthus annuus) crops and apple orchards (Fedoriak et al., 2021). Honey bees, which are currently widespread on the territory of the Chernivtsi region, are mainly hybrids between two subspecies, Apis mellifera carnica and A. m. macedonica (Cherevatov et al., 2019; Cherevatov et al., 2020). Being the smallest region (8,100 km²), it represents 1.3% of the total area of Ukraine. Despite its small size, the honey harvest was 942 tons in 2019. However, there is significant potential for beekeeping and high-quality honey production, due to the unique natural and climatic conditions. The peculiarity of the region is the extent of its territory from west to east and the influence of the Carpathian mountain system on the climate. In general, the climate is quite mild and humid, but the complex terrain causes some differences: in the east it is more continental, while in the mountains and foothills it is more severe. This patchiness in the environmental conditions drives the wide diversity of nectar plants and hence diversification of honey varieties. Moreover, the landscapes function as coupled socioecological systems (Partelow, 2018) that have been dramatically changed by human activities. Therefore, the assessment of physical and chemical indicators of honey quality is an important condition for the development of beekeeping and the prospect of export opportunities in the region.
The aim of the study was to check the compliance of honey from apiaries located in landscapes with different socio-ecological conditions with Ukrainian and international quality standards. We also aimed to find out whether the physicochemical quality of honey differed significantly among the different socio-ecological conditions. This is an important study, as future economic developments in honey production depend on meeting national and European standards.
Material and methods
Chernivtsi region is made up of three major physiographic units: mountain, foothill and lowland. All three zones differ in natural conditions, vegetation, agricultural land composition, levels of economic development, culture and demographic indicators, forming a steep social and ecological gradient (Fedoriak et al., 2021). The most prominent difference among the major physiographic units refers to the intensity of agriculture. We chose one administrative district for each physiographic unit, representing the 'Traditional village' stratum, the 'Intermediate' stratum, and the 'Intensive agriculture' stratum (Fig. 1). The 'Traditional village' stratum (Putyla district) is located in the Eastern Carpathian Mountains. This area is dominated by traditionally practised subsistence farming, including both growing crops and tending livestock. The main land cover classes are coniferous (spruce, stone pine) and mixed forests, and natural meadows located above the tree line. The 'Intermediate' stratum (Storozhynets district) is situated further to the east in the Carpathian Mountain foothills and combines features of the neighbouring two strata. Agricultural production and forestry are both major kinds of land use here, while subsistence farming persists in the large villages alongside modern agriculture with diverse crops. The 'Intensive agriculture' stratum (Khotyn district) in the east is represented mainly by agricultural land managed by big international agricultural businesses and private orchards. The proportion of crop land cover varied from 17% in the 'Traditional village' stratum to 79% in the 'Intensive agriculture' stratum (Fedoriak et al., 2021).
Sixty-five polyfloral honey samples were received directly from beekeepers from three districts of Chernivtsi region (26 samples were collected in the 'Traditional village' stratum, 17 in the 'Intermediate' stratum and 22 in the 'Intensive agriculture' stratum). The samples were stored at 16 °C in the dark.
A comprehensive analysis of the honey was performed. This analysis included normalized quality indicators, pH and the carbohydrate profile. Honey quality assessment was performed by means of physicochemical methods of analysis. The mass fraction of reducing sugars and moisture content, diastase activity, acidity, electrical conductivity, hydroxymethylfurfural (HMF) and proline content were determined by conventional methods in accordance with the Ukrainian national standards. Moisture content was determined refractometrically. A sample of honey was placed in a test tube and heated in a water bath to 60 °C until complete dissolution of the crystals. The tube was then cooled to room temperature, a drop of honey was applied onto the refractometer prism (type RHB 90 ATC, China, 2019) and the refractive index was measured and converted into the mass fraction of water in honey.
The content of reducing sugars was determined spectrophotometrically. Preparation of the honey solution: 2 g of the honey sample was dissolved in 20-30 mL of distilled water and made up to 100 mL with distilled water (solution A). Distilled water was added to 10 mL of solution A to reach 100 mL (solution B).
Determination of the mass fraction of reducing sugars: 20 mL of 1% potassium hexacyanoferrate (III) solution, 5 mL of 2.5 mol/dm³ sodium hydroxide solution and 10 mL of solution B were added together to a 100 mL conical flask. The mixture was heated to boiling, boiled for 1 minute and immediately cooled to room temperature. The optical density of the solution was measured on a Cary 60 UV-Vis spectrophotometer (USA, 2017) in a 10 mm cuvette at a wavelength of 440 nm against distilled water. The amount of reducing sugars (mg) was found by interpolating the optical density values of the samples on a calibration curve. The mass fraction of reducing sugars in the dry matter was calculated as the ratio of the amount of reducing sugars read from the calibration curve (mg) to the mass of the honey sample (g). The carbohydrate profile was determined by high-performance liquid chromatography coupled with a Corona Veo RS detector (USA, 2017) by scientists of the Faculty of Science of Palacký University Olomouc (Czech Republic). In brief: isocratic elution was performed on a Luna Omega Sugar column (3 µm, 100 Å, 150 × 3.0 mm) at 30 °C and 1 mL/min of 80% acetonitrile. The Corona Veo RS detector was set to 35 °C and a frequency of 25 Hz. The run time was 7 min.
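As a rough illustration of the arithmetic implied by the dilution scheme above, the sketch below (Python) propagates a calibration-curve reading back to the 2 g honey sample; the calibration slope and the explicit handling of the two ten-fold dilutions are assumptions for illustration rather than the authors' exact formula.

```python
def reducing_sugars_percent(abs_440, calib_mg_per_abs_unit, sample_mass_g=2.0):
    """Mass fraction (%) of reducing sugars for the dilution scheme above:
    2 g honey -> 100 mL (solution A), 10 mL of A -> 100 mL (solution B),
    10 mL of B taken into the reaction mixture.

    calib_mg_per_abs_unit is a hypothetical calibration-curve slope
    (mg of reducing sugars per absorbance unit at 440 nm).
    """
    mg_in_reacted_aliquot = abs_440 * calib_mg_per_abs_unit
    dilution = (100 / 10) * (100 / 10)        # aliquot -> B (x10) and B -> A (x10)
    mg_in_sample = mg_in_reacted_aliquot * dilution
    return 100.0 * (mg_in_sample / 1000.0) / sample_mass_g   # mg -> g, then %

# With a hypothetical slope of 40 mg per absorbance unit, an absorbance of
# 0.40 corresponds to 0.40 * 40 * 100 = 1600 mg in 2 g of honey, i.e. 80 %.
print(reducing_sugars_percent(0.40, 40.0))   # -> 80.0
```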
Diastase activity was determined spectrophotometrically and results were presented in Goethe units.
An aqueous honey solution was prepared by dissolving 5 g of honey in 10 mL distilled water, transferring it to a 50 mL volumetric flask and making it up to the mark with distilled water. 14 mL of the mixed reagent (8 parts of 0.25% starch solution; 5 parts of acetate buffer, pH = 5; 1 part of 0.1 mol/dm³ sodium chloride) were poured into three test tubes. The tubes were closed with stoppers and placed in a water bath at 40 °C. Then 1 mL of honey solution was poured into two test tubes, and 1 mL of distilled water was added to the third tube (control experiment). The contents of the tubes were thoroughly mixed and kept in the water bath for 15 min at 40 °C. The samples were then quickly cooled to 20 °C using ice. 2 mL of the mixture from each test tube was mixed with 40 mL distilled water and 1 mL of 0.25% iodine solution in three 50 mL volumetric flasks. The contents of the flasks were made up to the mark with distilled water, thoroughly mixed, and kept in a water bath for 10 min at 20 °C.
The optical density of the samples was measured on a Cary 60 UV-Vis spectrophotometer (USA, 2017) in a 10 mm cuvette at a wavelength of 590 nm. The diastase activity of honey was defined as the amount of enzyme that can convert 0.01 g of starch in 1 hour at 40 °C under the test conditions. The results were presented in Goethe (or Schade) units per 1 g of dry matter.
Free acidity was determined by titration with sodium hydroxide solution to pH 8.3. A honey sample of 10.0 ± 0.01 g was diluted with distilled water (75 mL) and titrated with a 0.1 mol/dm³ NaOH solution.
Electrical conductivity was measured by a digital benchtop multiparameter instrument, type PC 52+ DHS XS instruments (Italy, 2019) in a 20% (w/v) honey solution diluted with distilled water.
Hydroxymethylfurfural (HMF) content was determined by the Winkler spectrophotometric method on a Cary 60 UV-Vis spectrophotometer (USA, 2017). It is based on the interaction of HMF, para-toluidine and barbituric acid with the formation of a red-coloured complex. The optical density of the solution was measured at a wavelength of 550 nm, recording the maximum optical density value within 2-6 min after adding the barbituric acid. The results were presented in mg HMF/kg honey.
The content of proline was determined spectrophotometrically by measuring the optical density of the proline-ninhydrin coloured complex at a wavelength of 510 nm. 5.0 ± 0.01 g of the honey sample was diluted in 100 mL distilled water. The content of proline (P), in mg/kg, was calculated as the optical density of the honey solution multiplied by the dilution factor and divided by the optical density of the standard proline solution (0.0008 g/25 mL of solution).
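The proline calculation described above is a simple ratio; the following snippet (Python) merely restates it, with purely hypothetical absorbance readings and dilution factor, since the latter is not stated explicitly in the text.

```python
def proline_mg_per_kg(abs_sample, abs_standard, dilution_factor):
    """Proline content as described above: optical density of the honey
    solution times the dilution factor, divided by the optical density of
    the proline standard (0.0008 g per 25 mL)."""
    return abs_sample / abs_standard * dilution_factor

# Hypothetical readings and an illustrative dilution factor.
print(proline_mg_per_kg(abs_sample=0.25, abs_standard=0.30, dilution_factor=600))
```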
The hydrogen index (pH) was measured in a honey solution with a digital benchtop multiparameter instrument, type PC 52+ DHS XS Instruments (Italy, 2019), using pH electrodes while stirring constantly on a magnetic stirrer.
The obtained experimental results were compared with the norms of the national standard DSTU 4497:2005, the EU Directive relating to honey and the Codex Alimentarius Honey Standard. Directive 2001/110/EC and the Codex Alimentarius Honey Standard state the general regulations regarding the composition and content of various types of honey. According to the Ukrainian standard, honey is divided into two basic classes, i.e. Extra Class and First Class (Table 1). Statistical analysis of the results was performed using the software RStudio, version R 4.1.2 (USA, 2022). Normality of data distribution was verified with the Shapiro-Wilk test. All variables failed to meet parametric test assumptions, so they were tested with methods of nonparametric statistics (Kruskal-Wallis ANOVA) for non-normally distributed data. Post hoc comparisons were conducted using Kruskal-Wallis and pairwise Wilcoxon rank sum tests with a Bonferroni adjustment.
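As a rough illustration of the testing sequence described above, the sketch below uses Python/SciPy rather than R; the data-frame layout, column names and significance threshold are assumptions for illustration, not part of the original analysis.

```python
from itertools import combinations

import pandas as pd
from scipy import stats

def compare_strata(df: pd.DataFrame, value_col: str, group_col: str = "stratum"):
    """df: one row per honey sample; `value_col` holds a quality parameter,
    `group_col` holds the stratum label -- an assumed layout."""
    groups = {name: part[value_col].dropna().to_numpy()
              for name, part in df.groupby(group_col)}

    # Shapiro-Wilk normality check for each stratum
    normality = {name: stats.shapiro(values).pvalue for name, values in groups.items()}

    # Kruskal-Wallis omnibus test across all strata
    kruskal_p = stats.kruskal(*groups.values()).pvalue

    # Pairwise Wilcoxon rank-sum tests with a Bonferroni adjustment
    pairs = list(combinations(groups, 2))
    pairwise_p = {
        (a, b): min(stats.ranksums(groups[a], groups[b]).pvalue * len(pairs), 1.0)
        for a, b in pairs
    }
    return {"shapiro_p": normality, "kruskal_p": kruskal_p, "pairwise_p": pairwise_p}
```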
Results
Twelve physicochemical indicators of honey quality were analyzed in the sixty-five polyfloral honey samples collected from the three strata of Chernivtsi region. The obtained results on the content of reducing sugars, fructose and glucose, and on the fructose/glucose ratio in the examined honey samples are displayed in Figure 2.
The content (%) of reducing sugars in the honey samples varied within the range of 66.0% to 97.6% (Fig. 2) and showed a significant difference between the 'Intensive agriculture' and 'Intermediate' strata of Chernivtsi region. The mean reducing sugars content for the 'Traditional village' stratum was 79.2%, ranging from 67.3% to 90.4%. The variation in the 'Intermediate' stratum extended between 66.0% and 89.2% with a mean of 76.6%. In the 'Intensive agriculture' stratum it varied from 72.2% to 97.7% with a mean of 82.8%.
The studied honey samples had the following sugar profile: fructose, glucose, sucrose, maltose, trehalose and melezitose. The investigated samples of honey did not differ significantly in the content of glucose. The fructose content varied from 342 to 549 mg/g, and the glucose content ranged within 283 to 517 mg/g (Fig. 2). Higher average contents of fructose and glucose were found in the honey samples from the 'Intensive agriculture' apiaries (458 and 393 mg/g, respectively) and the 'Traditional village' apiaries (438 and 375 mg/g) (Fig. 2). All the studied samples of honey displayed a higher fructose content than glucose content. The average fructose/glucose ratio was 1.2 for the honey samples from the three studied districts. The minimum ratio of monosaccharides was noted in the samples of honey from the 'Intermediate' stratum (1.05 to 1.30), and it increased in the samples from the 'Intensive agriculture' (1.05-1.41) and 'Traditional village' (1.09-1.47) districts.
Sucrose was only detected in two samples of honey from the 'Intermediate' stratum, with contents of 5.3 and 11.2 mg/g. The sugar profile that we were able to determine displayed an absence of raffinose in all samples of honey. Besides fructose, glucose and sucrose, some oligosaccharides, such as maltose, trehalose and melezitose, were found in the honey. Some samples of honey contained maltose in small quantities: its content varied from 0.2 to 31.3 mg/g in 14 samples of honey from the 'Traditional village' stratum and from 0.2 to 12.7 mg/g in 11 samples from the 'Intermediate' stratum, and 4 samples from the 'Intensive agriculture' stratum contained 1.9 to 25.0 mg/g of maltose (Table 2). Trehalose was only identified in two samples from the 'Intermediate' stratum and one from the 'Traditional village'; its content was 5.2, 7.2 and 3.9 mg/g, respectively. Melezitose was detected in the honey samples from the 'Traditional village' (21 samples) and 'Intermediate' (5 samples) districts (Table 2). The content of melezitose varied within a broad range, in particular 0.7-107.1 mg/g was registered in the samples from 'Traditional village' and 7.4 to 112.3 mg/g in the samples from 'Intensive agriculture'. Small amounts of trehalose were detected in only three samples from the 'Traditional village' (3.9 mg/g) and the 'Intensive agriculture' (5.2 and 7.2 mg/g) districts. Consequently, the presence of melezitose may indicate that some honeydew impurities are contained in the honey samples from 'Traditional village' and 'Intensive agriculture'.
The total variability of HMF content in the studied honey samples from the 65 apiaries in the three districts of Chernivtsi region ranged from 0.19 to 30.8 mg/kg (Fig. 3). The range of HMF content in samples of honey from the 'Traditional village' stratum (0.48 to 30.82 mg/kg) was higher than in the others (0.29-11.86 and 0.19-17.33 mg/kg for the samples from the 'Intensive agriculture' and 'Intermediate' strata, respectively), but the differences were not significant.
The variability of diastase activity in honey samples from Chernivtsi region ranged from 13.9 to 63.5 Goethe units (Fig. 3). It should be noted that the widest range of diastase activity (from 13.29 to 63.49 Goethe units) was registered in the 'Traditional village' stratum. The average value of this indicator was also the highest in this area (37.00 Goethe units). No statistically significant difference in diastase activity was found between strata.
Our studies have revealed significant variability in the electrical conductivity of the honey samples (Fig. 3). Statistical differences in electrical conductivity were observed between the 'Intensive agriculture' and 'Traditional village' strata and between the 'Intermediate' and 'Intensive agriculture' strata according to the Wilcoxon W-test, at P < 0.05. It varied from 0.14 mS/cm in the 'Intensive agriculture' stratum to 1.2 mS/cm in the 'Intermediate' stratum.
The minimum moisture content was found to be 16.2% (in the 'Traditional village' and 'Intermediate' stratum), and 22.2% was the maximum (in the 'Intermediate' stratum).
Our studies have shown that free acidity of the samples varied within the range of 13.5 to 58.0 meq/kg (Fig. 3). The obtained results showed no statistical significance between strata.
According to the results of our studies, the pH of honey samples ranged from 3.34 to 4.56 (Fig. 3). The average values of the hydrogen index were 3.55, 3.85 and 3.91 for honey samples from the 'Intensive agriculture', the 'Intermediate' and the 'Traditional village' strata, respectively. A significantly higher concentration of hydrogen ions in the honey solutions from the 'Intensive agriculture' stratum was demonstrated in accordance with Wilcoxon W-test.
Proline content variability for the three studied geographical areas ranged from 82.3 to 1201.2 mg/kg (Fig. 3). The content of proline in the samples of honey from the 'Intermediate' stratum significantly differed from the 'Intensive agriculture' and 'Traditional village' strata in accordance with the Wilcoxon W-test. The highest average value (725.0 mg/kg) of proline content was observed in the samples from the 'Traditional village' stratum, which has the least anthropogenic effect compared to other districts we studied. The lowest average value (300.0 mg/kg) was found in the samples from the 'Intermediate' stratum.
Discussion
The largest part of the dry matter in honey consists of carbohydrates, represented by mono-, di- and trisaccharides. Glucose and fructose are the principal constituents of honey carbohydrates (da Silva et al., 2016). These components determine the basic properties of honey such as sweetness, nutritional value, granulation tendency, crystallization and hygroscopicity. The glucose and fructose content of honey are derived mainly from plant nectar; only a small amount is formed from sucrose and is accumulated in the process of its maturation under the influence of enzymes and organic acids contained in honey. The content of glucose and fructose is known as an indicator of honey naturalness (Kornienko et al., 2017). The sucrose content in natural honey is insignificant and may decrease in the course of storage due to the process of self-inversion.
In accordance with the requirements of international standards, in particular the European Directive 2001/110/EC and the standard of the Codex Alimentarius Commission, the share of reducing sugars in natural honey must be not less than 60%. At the same time, according to the national standard DSTU 4497:2005, the mass share of reducing sugars should be at least 80% for Extra Class honey and at least 70% for First Class honey.
All the honey samples from private apiaries located on the territory of the three study districts of Chernivtsi region were found to comply with the international standards in terms of reducing sugars' content (Table 3).
The sweetness of honey depends on the concentration of constituent sugars and their origin. The sweetest honey has high fructose concentration. The content of glucose and fructose in the dry matter constitutes approximately 70-80% of all sugars contained in flower honey and 55-65% in honeydew honey (Kowalski et al., 2013). The content of invert sugars in flower honey samples from 18 different places in Bosnia and Herzegovina ranged from 64.8% to 85.0% (Prazina & Mahmutović, 2017). In high-quality honey, the glucose content is usually lower (about 30-35%) than fructose (about 35-40%). Some physical properties of honey depend on their ratio. The higher amount of glucose in honey leads to its faster crystallization, while the fructose content influences its taste, making it sweeter and more hygroscopic (da Silva et al., 2016).
The ratio of fructose and glucose (F/G) in the prevailing majority of cases exceeds 1.0, and this indicator can be used to identify monofloral honey. Data from different sources state that acacia and chestnut types of honey are rich in fructose (F/G 1.5-1.7), while oilseed rape and dandelion types of honey demonstrate a higher glucose content (Prazina & Mahmutović, 2017). Likewise, compared with acacia honey, buckwheat and linden honey show a higher glucose content relative to fructose (Kowalski et al., 2013). The information on the ratio of fructose and glucose in honeydew honey is quite ambiguous. Thus, Primorac et al. (2009) noted a higher content of fructose rather than glucose (32.4:31.0%) in the honeydew honey samples from Croatia, while the corresponding honey samples from Macedonia demonstrated the opposite result of 36.8% glucose and 33.6% fructose.
Apart from monosaccharides, flower honey contains a number of disaccharides, among which sucrose, maltose, trehalose and turanose are the main ones (Bogdanov et al., 2008). Their content ranges between 3.29-18.6%, and the oligosaccharide content is noted as 0.13-10.0% (Kowalski et al., 2013). The most common disaccharide is sucrose. According to the current European requirements for honey quality, the sucrose content in all types of flower honey is set at not more than 5 g/100 g, except for some monofloral types of honey (Banksia, Citrus, Hedysarum, Medicago and Robinia) with a sucrose content of up to 10 g/100 g and Lavandula honey containing up to 15 g/100 g of sucrose (EU Council, 2002; Codex Alimentarius Commission, 2001). The requirements of the Ukrainian national standard DSTU provide for a sucrose content of not more than 3.5% (for the Extra Class) and not more than 6% (for the First Class) (DSTU, 2007). The absence of sucrose in almost all of the examined samples of honey is probably a result of the correct maturation of honey (Kowalski et al., 2013).
The disaccharide maltose contributes to honey sweetness and its content can vary within the range of 2.8 to 7.5% (Kowalski et al., 2013). The presence of various trisaccharides such as melezitose, maltotriose, and raffinose is an indicator of the honeydew content in the samples (Bogdanov et al., 1999). The trisaccharide melezitose, which is usually not found in flower honey, is contained in a significant proportion in honeydew honey. Melezitose is contained in honey produced by bees from the honeydew of both deciduous and coniferous plants (Rybak-Chmielewska et al., 2013). Honeydew often gets into flower honey in various quantities. Numerous authors are looking for parameters by which such honey can be quickly identified. Bogdanov & Gfeller (2006) used discriminant analysis to classify flower, honeydew and mixed honey types, and it was noted that melezitose, as the only variable, had a high discriminant power of 96% for the classification of honey. Honeydew honey contains more of the oligosaccharides melezitose and raffinose compared to flower honey. According to the EU Honey Standard, honeydew honey is produced by bees from honeydew (secretion), which is a sweet, transparent and viscous substance of animal origin (secreted by insects), and honeydew drops (juice from the leaves and stems of plants) (EU Council, 2001). Honeydew contains a more complex range of sugars than nectar, therefore honeydew honey has a much lower content of disaccharides and more oligosaccharides than flower honey. This is closely related to the fact that honeydew contains enzymes (which are absent in nectar) secreted by the salivary glands and intestines of insects (Victorita et al., 2008).
Hydroxymethylfurfural (HMF) is a cyclic aldehyde that is formed from reducing sugars in an acidic environment when honey is heated and undergoes the Maillard reaction (a non-enzymatic formation of coloured melanoidins). However, the duration and conditions of honey storage may also cause the formation of HMF. It was demonstrated by Shapla et al. (2018) that honey stored at low temperatures had a low content of HMF, whilst honey stored at high and medium temperatures had a high content of HMF. The studies of Alias et al. (2018) revealed that HMF production increases proportionally to the increase of temperature and duration of heating. Thus, the content of HMF is a parameter that indicates the freshness of honey, since it is usually either not registered or registered in small quantities in fresh honey. HMF tends to be formed faster from ketohexoses, such as fructose, than from glucose (Shapla et al., 2018).
According to the national standard, the HMF content in the "Extra Class" honey and the "First Class" honey must not exceed 10 and 25 mg/kg respectively. According to the International standards (Codex Alimentarius Commission, 2001;EU Council, 2002), the HMF content must be no more than 40 mg/kg. Thus, all samples of honey complied with the international quality standards (Table 3). It must also be stated that one sample from the 'Traditional village' stratum did not meet the national standard (not more than 25 mg/kg).
Honey diastase is an enzyme that is formed from flower nectar together with the secretions of the honey bees' salivary glands. However, it remains unclear why some honey types of different botanical origin have different diastase activity, whereas some other types of honey (Erica, Robinia, Rosmarinus, Taraxacum, Arbutus, Citrus) demonstrate consistently low activity of this enzyme (3-5 Goethe units) (Thrasyvoulou et al., 2018). The different diastase activity is considered to be caused by a number of factors, such as the period of nectar collection, the efficiency of nectar processing by honey bees, the age of the honey bees, the physiological state of the honey bee colony and others (da Silva et al., 2016; Gismondi et al., 2018).
The diastase activity measurement is used to assess the quality parameters of honey, in particular its freshness. The studies of Isopescu et al. (2014) have shown that diastase activity is an extremely variable indicator that depends on a number of uncontrolled external and internal factors. Consequently, its use as an indicator of the freshness of honey is extremely controversial.
Honey is often heated when it crystallizes to improve its texture, viscosity and appearance. It is known that the loss of valuable properties of honey is proportional to temperature and heating time (Cozmuta et al., 2011). Control parameters such as diastase activity and HMF serve as indicators of the intensity and duration of the heat treatment of honey (Ramirez-Cervantes et al., 2000).
According to the Honey Quality Requirements of the EU Council Directive, the diastase activity must not be less than 8 Schade units, expressed as the diastase number (DN). DN on the Schade scale, which corresponds to the Goethe unit number, is defined as 1 g of starch hydrolysed per 100 g of honey in 1 hour at a temperature of 40 °C.
DN exceeds 25 Goethe units, whereas HMF is either not registered or has a low value, in freshly collected samples of honey. In the process of honey heating or long-term storage the diastase activity decreases and HMF, on the contrary, increases. If the diastase number is less than 8 Goethe units or HMF is more than 40 mg/kg, the quality of honey is considered to be unsatisfactory and the honey is classified as baking honey (Thrasyvoulou et al., 2018). The obtained results of diastase activity in all samples from the Chernivtsi region are consistent with the studies of other authors, who also highlight the variability of diastase activity from 13.9 to 50 Goethe units (Isopescu et al., 2014).
We found (Table 3) that all of the studied honey samples complied with the national (not less than 10 Goethe units) and international (not less than 8 Goethe units) quality standards. It should be noted that the maximum range of diastase activity was registered in the 'Traditional village' stratum (Fig. 2). The average value of this indicator is also the highest in this area. In our opinion, this can be explained by the more diverse foraging resources for bees and the larger number of honey bees out in the meadows in this period (Bálint et al., 2011). Moreover, it should be taken into consideration that this district undergoes less anthropogenic influence in comparison with the other two strata.
Especially high levels of diastase activity are known for monofloral Thymus honey (Nousias et al., 2017). Several species of Thymus are widely represented in the 'Traditional village' stratum (Nachychko & Honcharenko, 2017). Therefore, this could be the reason that honey samples from this area have a higher average value of diastase activity compared to the others. The highest value of this indicator (63.49 Goethe units) was also registered in these honey samples.
Electrical conductivity (EC) depends on the content of mineral salts, organic acids and proteins (Yücel & Sultanog, 2013) and indicates the origin of honey (Karabagias et al., 2014). A value of ≤ 0.8 mS/cm indicates the floral origin of honey, while a larger EC value indicates that honey is of honeydew origin. There are some exceptions: in particular, international standards state that such types of honey as Persea americana (avocado honey), Polygonum aviculare (knotweed honey), Paliurus spina-christi (Jerusalem thorn honey) and Gossypium sp. (cotton honey) have an electrical conductivity above 0.8 mS/cm. Our studies have revealed significant variability in the electrical conductivity of the honey samples. Although the average values of the study samples do not exceed the permissible limits stated in the national standard DSTU, there are samples of honey with high values of electrical conductivity that do not comply with the international standards (≤ 0.8 mS/cm). Most likely, these samples can be classified as honeydew honey or contain a particular composition of mineral salts, organic acids and proteins that can cause high values of electrical conductivity (Yücel & Sultanog, 2013).
Moisture content in honey plays an important role in determining the general characteristics of honey and assessing its quality. Moisture content in honey depends on a number of factors: climatic conditions, flower composition, harvesting conditions etc. (Karabagias et al., 2014). Mature honey contains not more than 18% of moisture; international standards allow moisture content up to 20%, except for honey from heather (Calluna vulgaris) where moisture content level is allowed up to 23% (Thrasyvoulou et al., 2018). The higher the moisture content in honey, the greater is the probability of fermentation processes resulting in its colour and taste changes. Most samples of honey of different botanical origin have moisture content of about 18%. However, some monofloral types of honey (e.g. Erica arborea, E. manipuliflora, E. verticillata), clover honey (Trifolium spp.), Arbutus unedo, Polygonum aviculare naturally contain 20% of moisture (Thrasyvoulou et al., 2018). According to other studies, the moisture content in honey samples can range from 10.5% to 20.5% (Karabagias et al., 2014).
All the investigated samples of honey met the criteria of the national standard DSTU for this parameter (Table 2). However, 4% of honey samples from the 'Traditional village' stratum, 14% from the 'Intensive agriculture' stratum and 16% from the 'Intermediate' stratum did not meet the international standards requirements.
Honey contains organic (about 0.3%) and inorganic (0.03%) acids, so it has an acidic environment. It contains formic, acetic, lactic, succinic, malic, tartaric, citric, pyruvic, gluconic and some other organic acids. As for the inorganic acids, phosphoric and hydrochloric acids can be registered in honey. Acids can be found in honey in free and bound states and get there from nectar, honeydew, pollen grains and bee secretions. They can also be synthesized in the process of enzymatic decomposition and oxidation of sugars (da Silva et al., 2016).
Free acidity is a parameter that is associated with deterioration in honey quality and it is characterized by the presence of organic acids in equilibrium with lactone, internal esters and some inorganic ions such as phosphates, sulphates and chlorides (da Silva et al., 2016).
It is important to note that honey has natural acidity regardless of its geographical origin. A high value of free acidity indicates the enzymatic conversion of sugars into organic acids and serves as an indicator of honey freshness. Complex transformations underlying the process of storage are known (Acquarone et al., 2007). These transformations increase the content of free acids and correspondingly reduce the value of the hydrogen index. The changes occur more intensively after 12 months of honey storage.
The increase in acidity also occurs during the fermentation of honey. Honey sugars are converted into volatile acids (C₂-C₁₂) by yeast. These volatile acids impair the organoleptic properties of honey, in particular its colour and taste (da Silva et al., 2016).
Free acidity exceeding the national standard for First Class honey and the international standards was revealed in 12% of all samples from the 'Intermediate' stratum, 9.5% from the 'Intensive agriculture' stratum and 4% from the 'Traditional village' stratum. The elevated values of free acidity indicate the fermentation of sugars into organic acids (Table 3).
The share of samples that corresponded to the Extra Class of honey according to the national standard DSTU ranged from 53% for the 'Intermediate' stratum to 79% and 81% for the 'Traditional village' and the 'Intensive agriculture' strata, respectively (Table 3). The significantly higher acidity of honey in the 'Intermediate' stratum indicates the fermentation of sugars into organic acids. This may be due to the peculiarities of the geographical region or to honeydew impurities that cause a high content of organic acids.
The hydrogen index characterizes the activity or concentration of hydrogen ions in honey solutions. Although a pH limit is not currently set by the regulatory committees, the allowable pH values are 3.2-4.5 (da Silva et al., 2016). The obtained results showed that the honey pH in all the studied strata corresponded to the permissible level. The pH value of honey is closely related to the existence and activity of microorganisms. The optimal pH for most organisms is from 7.2 to 7.4, therefore the low pH level will prevent microbiological spoilage of honey (Ratiu et al., 2020). The pH value may also be an indicator of fake honey: the addition of high-fructose corn syrup to honey significantly increases the pH value.
Proline is a free amino acid that enters honey from flower nectar and pollen grains and is also produced in large quantities by bees.
The content of proline in natural honey is 45% to 85% of the total amount of amino acids (Postoienko et al., 2019). Therefore, this indicator is used as a criterion of the naturalness and maturity of this product. If honey is harvested immature or contains a sugar blend, the proline content will be very low. Proline content and diastase activity are indicators that represent the enzymatic activity of honey according to the current regulations and standards. By determining the proline content, it is easy to assess the quality of honey of varied botanical origin (Adamchuk et al., 2019). Moisture content affects the overall amount of proline (Lazareva & Postoienko, 2016).
In accordance with the requirements of the national standard DSTU 4497:2005, the content of proline must be not less than 300 mg/kg for all types of honey of the Extra Class and the First Class and not less than 200 mg/kg for acacia honey. Codex Alimentarius CODEX STAN12-1981 and Council Directive 2001/110/EC do not regulate the content of proline. However, according to the agreement of the German Beekeepers Association, the content of proline in natural honey must be not less than 180 mg/kg. High quality honey can contain up to 550 mg/kg of proline (Lazareva, 2015).
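As a purely illustrative sketch of how such threshold criteria can be applied, the snippet below classifies honey samples by proline content against the limits cited above (300 mg/kg for Extra and First Class under DSTU 4497:2005, 200 mg/kg for acacia honey, 180 mg/kg under the German Beekeepers Association agreement); the sample values and function names are assumptions for illustration only, not data from this study.

```python
# Minimal sketch: check honey samples against the proline thresholds cited in the text.
# The numeric sample values below are hypothetical.

DSTU_GENERAL_MIN = 300.0   # mg/kg, Extra and First Class honey (DSTU 4497:2005)
DSTU_ACACIA_MIN = 200.0    # mg/kg, acacia honey (DSTU 4497:2005)
GERMAN_ASSOC_MIN = 180.0   # mg/kg, German Beekeepers Association agreement

def proline_compliance(proline_mg_kg: float, is_acacia: bool = False) -> dict:
    """Return which proline criteria a sample meets."""
    dstu_min = DSTU_ACACIA_MIN if is_acacia else DSTU_GENERAL_MIN
    return {
        "meets_DSTU": proline_mg_kg >= dstu_min,
        "meets_German_association": proline_mg_kg >= GERMAN_ASSOC_MIN,
    }

# Hypothetical mean proline contents (mg/kg) for three strata
samples = {
    "Traditional village": 610.0,
    "Intensive agriculture": 450.0,
    "Intermediate": 250.0,
}
for stratum, proline in samples.items():
    print(stratum, proline_compliance(proline))
```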
Although the average values of proline for the study samples of honey met the standards of the current national standard DSTU (300.3-725.0 mg/kg), there were samples found in each of the study districts that had a proline content < 300 mg/kg.
The content of proline differed in the samples of honey from the study districts according to the Wilcoxon W-test. Higher proline content was observed in the samples from the 'Traditional village' stratum, which experiences the lowest anthropogenic effect compared to the other districts we studied. The share of samples that do not comply with the national standard DSTU 4497:2005 increased in the following order: the 'Traditional village' stratum (3.9%) → the 'Intensive agriculture' stratum (9%) → the 'Intermediate' stratum (47.1%, Table 3).
Conclusions
In terms of physical and chemical parameters, the study shows that the honey from the Chernivtsi region is of high quality. The share of reducing sugars is ~ 80%, which indicates its nutritional value. The study samples of honey have a low pH level (~ 3.7) and a high content of proline (~ 513 mg/kg). Besides fructose and glucose, individual honey samples also contained oligosaccharides such as maltose, trehalose and melezitose. No sucrose was detected in most of the analyzed honey samples, except for two samples from the 'Intermediate' stratum. The samples of honey from apiaries in the 'Traditional village' and the 'Intensive agriculture' strata complied with international and national quality standards, which indicates their better quality in comparison with the samples from the 'Intermediate' stratum. A total of 8-10% of samples deviated from the norms of Ukrainian and international standards. Therefore, continuous and up-to-date monitoring of honey quality remains relevant. | 2023-03-07T16:04:08.026Z | 2022-11-03T00:00:00.000 | {
"year": 2022,
"sha1": "184c8480ed9593ea24dd49ce74178c3ba77ad284",
"oa_license": "CCBY",
"oa_url": "https://medicine.dp.ua/index.php/med/article/download/834/847",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "539a4299bb7807e30e5ec1edad9fc4b9bee73b57",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
249681390 | pes2o/s2orc | v3-fos-license | Dietary Exposure to Essential and Non-essential Elements During Infants’ First Year of Life in the New Hampshire Birth Cohort Study
Even the low levels of non-essential elements exposure common in the US may have health consequences especially early in life. However, little is known about the infant’s dynamic exposure to essential and non-essential elements. This study aims to evaluate exposure to essential and non-essential elements during infants’ first year of life and to explore the association between the exposure and rice consumption. Paired urine samples from infants enrolled in the New Hampshire Birth Cohort Study (NHBCS) were collected at approximately 6 weeks (exclusively breastfed) and at 1 year of age after weaning (n = 187). A further independent subgroup of NHBCS infants with details about rice consumption at 1 year of age also was included (n = 147). Urinary concentrations of 8 essential (Co, Cr, Cu, Fe, Mn, Mo, Ni, and Se) and 9 non-essential (Al, As, Cd, Hg, Pb, Sb, Sn, V, and U) elements were determined as a measure of exposure. Several essential (Co, Fe, Mo, Ni, and Se) and non-essential (Al, As, Cd, Hg, Pb, Sb, Sn, and V) elements had higher concentrations at 1 year than at 6 weeks of age. The highest increases were for urinary As and Mo with median concentrations of 0.20 and 1.02 µg/L at 6 weeks and 2.31 and 45.36 µg/L at 1 year of age, respectively. At 1 year of age, As and Mo urine concentrations were related to rice consumption. Further efforts are necessary to minimize exposure to non-essential elements while retaining essential elements to protect and promote children’s health. Supplementary Information The online version contains supplementary material available at 10.1007/s12403-022-00489-x.
Introduction
Exposure to non-essential elements such as arsenic (As), lead (Pb), mercury (Hg), and cadmium (Cd) has become a significant global health issue owing to their frequency and toxic effects on human health (ATSDR 2019a). This concern is particularly relevant for infants and young children for whom non-essential element exposures, even at the low levels common in the United States of America (US) and elsewhere, may have health consequences (Farzan et al. 2016;Nadeau et al. 2014;Vahter et al. 2020;Wasserman et al. 2014).
There is a growing body of evidence reporting high levels of non-essential elements in foods for infants and young children (Arcella et al. 2021; EFSA 2009a; Karagas et al. 2016; Signes-Pastor et al. 2016), supporting that food intake is a source of both essential and non-essential elements (FDA 2020a). The US Subcommittee on Economic and Consumer Policy of the Committee on Oversight and Reform of the House of Representatives reported that US baby foods have levels of As, Pb, Cd, and Hg higher than current standards for food or water (Congress 2021a, b). Consumption of rice and rice-containing foods is common among infants and young children (e.g., during weaning) because of rice's putative organoleptic and nutritional value and relatively low allergenic potential. However, consumption of rice and rice-based products relates to an increase of urinary As concentrations (Davis et al. 2017; Karagas et al. 2016; Signes-Pastor et al. 2018). Other non-essential elements, such as Cd and Pb, also accumulate in foods grown with contaminated soil and water; the contamination comes from both natural and anthropogenic sources, such as agricultural and industrial activities (EFSA 2009b, 2010). Regulations setting maximum allowable levels of non-essential elements in food have recently been proposed or established to decrease exposure (EC 2015, 2021a, 2021b; FDA 2020b). Yet, further efforts are necessary to successfully minimize early-life toxic dietary exposures to protect public health (Congress 2021a; Nachman et al. 2018).
Very little data exist on biomarker measurements as internal exposures and exposure trends in essential and non-essential elements during the first year of life (Carignan et al. 2016; Karagas et al. 2016; Ljung et al. 2011; Signes-Pastor et al. 2017). Humans excrete several elements in urine after exposure and thus exposures can be assessed via urinary element concentrations (Fort et al. 2014). However, the rate of excretion in urine of each element may differ (ATSDR 2007, 2010, 2012a; EFSA 2009a, b; Vacchi-Suzzi et al. 2016).
In this study, we hypothesized that urinary element concentrations, as an indicator of internal exposure to essential [i.e., cobalt (Co), chromium (Cr), copper (Cu), iron (Fe), manganese (Mn), molybdenum (Mo), nickel (Ni), and selenium (Se)] and non-essential [i.e., aluminum (Al), As, Cd, Hg, Pb, antimony (Sb), tin (Sn), vanadium (V), and uranium (U)] elements would increase in the first year of life following the introduction of foods other than breastmilk, including solid foods among previously exclusively breastfed infants. To test our hypothesis, we assessed element concentrations in urine samples collected at 6 weeks of age before weaning and approximately 1 year of age among the same infants. Moreover, based on our prior findings on rice consumption and As exposure early in life , we also investigated the associations between rice and rice-based products consumption and the concentrations of other non-essential and essential elements in urine samples from one-year-old infants.
Study Population
Our study comprised infants enrolled in the New Hampshire Birth Cohort Study (NHBCS), a longitudinal pregnancy cohort designed to examine the impacts of toxicants in drinking water and diet on maternal-child health. Since 2009, the NHBCS has recruited pregnant women 18-45 years of age at approximately 24-28 weeks of gestation from prenatal clinics in the rural state of New Hampshire. Eligibility criteria include English literacy, the use of a private, unregulated water system at home (e.g., private well), not planning to move during pregnancy, and a singleton birth as described previously (Gilbert-Diamond et al. 2011). Women were asked to complete a self-administered lifestyle and medical history questionnaire (Gilbert-Diamond et al. 2011;Karagas et al. 2016).
The Committee for the Protection of Human Subjects at Dartmouth College approved the study, and all participants provided written informed consent.
Urine and Food Diary Collection
Spot urine samples were collected at approximately 6 weeks and 1 year of age in cotton urine pads and stored in polyethylene sterile containers. Samples were aliquoted into 1.8 ml vials within 24-72 h and frozen at −80 °C until analysis (Carignan et al. 2015).
The urine samples collection took place after completing a 3-day food diary. Infants' parents or caregivers were asked to complete the food diary at the end of each day. The unstructured food diary included details of infants' food and beverage intake during 3 consecutive days (e.g., time of feeding and type and amount of foods/beverage consumed). The food diaries were collected on paper during clinical visits (Carignan et al. 2015;Karagas et al. 2016;Signes-Pastor et al. 2018).
Laboratory Analysis
We determined urinary concentrations of essential elements (i.e., Co, Cr, Cu, Fe, Mn, Mo, Ni, and Se) and non-essential elements (i.e., Al, As, Cd, Hg, Pb, Sb, Sn, U, and V) at the Trace Element Analysis Core at Dartmouth College. Urinary specific gravity was measured with a handheld refractometer with automatic temperature compensation (PAL-10S; ATAGO Co Ltd).
Elemental analysis of urine was conducted with an Agilent 8900 inductively coupled plasma-mass spectrometer (ICP-MS) in direct solution acquisition mode. Urinary As species concentrations were determined using the Agilent 8900 ICP-MS interfaced with an Agilent 1260 liquid chromatograph equipped with a Thermo AS7, 2 × 250-mm column, and a Thermo AG7, 2 × 50-mm guard column (Signes-Pastor et al. 2020).
Several NIST human urine standard reference materials 2669 level I and level II were analyzed in each analysis batch. The average (standard deviation) recoveries across batches (n = 3) for arsenobetaine, DMA, MMA, and iAs were 105% (3), 115% (9), 100% (4), and 101% (11), respectively. The limit of detection (LOD) was calculated as the mean of the blank concentrations plus 3 times their standard deviation multiplied by the dilution factor. The average LOD across analysis batches for each essential and non-essential element of interest in this study is reported in Tables S1 and S2. Only when the ICP-MS standard calibration curve provided zero or negative values the value of LOD/√2 was imputed (Lubin et al. 2004). The remaining urine concentrations, even those below the LOD, were not imputed, taking advantage of the ICP-MS wide linear dynamic range (EFSA 2009a). Missing values were assumed to be at random. The Multivariate Imputation by Chained Equations (MICE) method was applied to impute the missing values with the average values obtained from 5 generated complete datasets (Buuren 2011).
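A minimal sketch of the detection-limit handling described above is given below: the LOD is computed from blank measurements as the blank mean plus three times the blank standard deviation, scaled by the dilution factor, and LOD/√2 is substituted only where the calibration curve returned zero or negative values. The numbers, array names, and the use of the sample standard deviation are illustrative assumptions, not the study's data.

```python
import numpy as np

def lod_from_blanks(blank_conc, dilution_factor):
    """LOD = mean(blanks) + 3 * SD(blanks), multiplied by the dilution factor."""
    blank_conc = np.asarray(blank_conc, dtype=float)
    return (blank_conc.mean() + 3.0 * blank_conc.std(ddof=1)) * dilution_factor

def impute_nonpositive(concentrations, lod):
    """Replace zero/negative calibration results with LOD / sqrt(2); keep all other values."""
    conc = np.asarray(concentrations, dtype=float)
    return np.where(conc <= 0, lod / np.sqrt(2), conc)

# Illustrative blank measurements and sample results (µg/L)
blanks = [0.010, 0.012, 0.008, 0.011]
lod = lod_from_blanks(blanks, dilution_factor=10)
raw = [0.20, -0.01, 0.00, 2.31]
print("LOD:", round(lod, 4), "imputed:", impute_nonpositive(raw, lod))
```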
Statistical Analysis
The urinary element concentrations, including the sum of urinary As species (ΣAs = inorganic arsenic + monomethylarsonic acid (MMA) + dimethylarsinic acid (DMA)), were divided by the specific gravity to correct for urine dilution (Nermell et al. 2008). The concentrations were positively skewed and thus they were natural logarithm transformed (Ln) before statistical analysis.
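For context, one common form of the specific-gravity (SG) dilution adjustment in the spirit of Nermell et al. (2008) rescales each concentration by (SG_reference − 1)/(SG_sample − 1) before the natural-log transformation; the reference value of 1.024 and the example data below are assumptions for illustration, not the exact values or formula used in this study.

```python
import numpy as np

def sg_adjust(conc_ug_per_l, sg_sample, sg_reference=1.024):
    """Specific-gravity adjustment: conc * (SG_ref - 1) / (SG_sample - 1)."""
    conc = np.asarray(conc_ug_per_l, dtype=float)
    sg = np.asarray(sg_sample, dtype=float)
    return conc * (sg_reference - 1.0) / (sg - 1.0)

# Illustrative urinary ∑As concentrations (µg/L) and specific gravities
conc = [0.20, 2.31, 45.36]
sg = [1.010, 1.018, 1.025]

adjusted = sg_adjust(conc, sg)
ln_adjusted = np.log(adjusted)   # natural-log transform used before statistical analysis
print(adjusted, ln_adjusted)
```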
Our study population comprises 2 separately drawn subgroups from the NHBCS according to the availability of element concentrations in paired urine samples at 6 weeks and 1 year of age and one-year-old infants' consumption of rice and rice-based products.
Subgroup 1 was used to evaluate changes in urinary elements from 6 weeks to 1 year of age. Subgroup 1 contained 187 infants exclusively breastfed at 6 weeks of age with paired urine samples at 6 weeks and 1 year of age analyzed for essential and non-essential element concentrations; 82 infants with missing dietary information at 6 weeks of age; and 79 consumers of formula or solid food at 6 weeks of age were excluded (Fig. S1A). The urinary Al and Sn concentrations contained 43 missing values each, which were imputed using MICE (Buuren 2011). In this subgroup, dietary information on rice consumption at 1 year of age was not available. The subgroup 1 dietary information was used to identify exclusively breastfed infants at 6 weeks consuming solid food at 1 year of age. The dietary information and urine samples were collected in 2014-19.
Subgroup 2 was used to evaluate the association between rice and rice product intake and urinary elements. Subgroup 2 contained 147 one-year-old infants with information on rice consumption after excluding 5 infants without urinary essential and non-essential elements data (Fig. S1B). In this subgroup, urinary Al, Sn, and Hg concentrations were excluded owing to the high proportion of imputed values (> 60%). The subgroup 2 dietary information regarding rice and rice-based product consumption and urine samples at 1 year of age were gathered in 2013-2014.
Using infant study population subgroup 1, we assessed the urinary essential and non-essential element concentrations in samples collected at 6 weeks and 1 year of age descriptively and by performing paired t test analyses. We calculated the ratio between the concentrations at 1 year of age and those at 6 weeks of age in the paired samples (i.e., the 1-year/6-week ratio of urine concentrations) to explore the magnitude of change in the urinary essential and non-essential element concentrations, as shown in Fig. S2. A ratio equal to 1 indicates that the concentrations did not change. We also performed the mixture approach Weighted Quantile Sum (WQS) regression using the assessment time point (i.e., 6 weeks vs. 1 year, binary) as the dependent variable. The WQS regression model included 40% of the dataset for training and 60% for validation, and 100 bootstrap samples were assigned for parameter estimation. The estimates of mixture effects and indicators of exposure importance (i.e., weights) were calculated with the WQS regression model by combining the exposures into an empirically weighted index (Carrico et al. 2015).
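A compact sketch of the ratio calculation and paired comparison described here is shown below on hypothetical concentrations for the same infants at the two ages; the values are invented, and SciPy's paired t test is used purely as an illustration, not as the authors' software.

```python
import numpy as np
from scipy import stats

# Hypothetical SG-adjusted urinary concentrations (µg/L) for the same five infants
six_weeks = np.array([0.15, 0.22, 0.18, 0.25, 0.20])
one_year = np.array([2.10, 2.45, 1.95, 2.80, 2.31])

# Magnitude of change: ratio of 1-year to 6-week concentrations (1 = no change)
ratios = one_year / six_weeks
print("median ratio:", np.median(ratios))

# Paired t test on the natural-log-transformed concentrations
t_stat, p_value = stats.ttest_rel(np.log(one_year), np.log(six_weeks))
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```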
Using the infant study population subgroup 2, we evaluated urinary essential and non-essential element concentrations at 1 year of age in association with rice consumption within the 2 days prior to urine sample collection. Descriptive and two-sample t test analyses comparing rice consumers vs. non-rice consumers were performed.
Results
Both infant study population subgroups had a slightly uneven distribution of boys and girls (45%/55% and 56%/44% of boys/girls in subgroup 1 and 2, respectively). Mothers were generally married (> 90%), and about 80% of them had a college graduate or any postgraduate schooling (Table 1).
Among non-essential elements, the median ratio between concentrations at 1 year and at 6 weeks of age ranged from 1.0 for U (i.e., no change) to 14.8 for ∑As (i.e., a nearly 15-fold increase) (Table S3). The median ratios for Al, Cd, Hg, Pb, Sb, Sn, and V ranged from 1.1 to 1.7 (Table S3). Among essential elements, the median ratio between 1 year and 6 weeks of age ranged from 0.9 for Mn to 31.8 for Mo. The median ratios for Co, Cr, Cu, Fe, Ni, and Se ranged from 0.9 to 1.9 (Table S3). The distributions of the natural-logarithm-transformed (Ln) 1-year/6-week ratios of urinary essential and non-essential element concentrations are shown in Fig. S2.
The overall analysis of exposure to the element mixture at 1 year of age versus 6 weeks of age using WQS model regression assigned the highest positive weights to urinary ∑As (i.e., 0.516) and Mo (i.e., 0.289) concentrations followed by urinary Co with a weight 0.081 (Fig. S3). Urinary ∑As, Mo, and Co represented 51.6%, 26.9%, and 8.1% of the total weights of the mixture. For Hg, V, Sb, Ni, Cr, and U, the positive weights ranged from 0.001 to 0.028 with a percentage contribution to the total weights of the mixture ranging from 0.1 to 2.8% (Fig. S3). The remaining elements had weighted index close to zero. The WQS model regression did not identify any negative weights.
In Subgroup 2, the consumption of rice at 1 year of age was associated with increased urinary ∑As and Mo concentrations with a p-value < 0.05 in two-sample t test analyses (Figs. 3, 4, and Table S4). The median urinary ∑As concentrations were 2.96 and 1.88 µg/L for rice and non-rice consumers, respectively. The median urinary Mo concentrations were 67.01 and 45.90 µg/L for rice and non-rice consumers, respectively (Table S4). Although urinary ∑As and Mo concentrations were only weakly correlated at 6 weeks of age (Spearman's ⍴ = 0.18, Fig. S4), they were moderately correlated at 1 year of age (⍴ = 0.64) (Fig. S5), among both infant rice consumers (⍴ = 0.48) (Fig. S6) and non-rice consumers (⍴ = 0.55) (Fig. S7). The consumption of rice at 1 year of age was also associated with a borderline statistically significant increase in urinary Ni, with a p-value of 0.053 in the two-sample t test analysis (Table S4).
Discussion
In our US-based exclusively breastfed infant study population, we found increased urinary concentrations of non-essential (i.e., Al, As, Cd, Hg, Pb, Sb, Sn, and V) and essential elements (i.e., Co, Fe, Mo, Ni, and Se) at 1 year compared to 6 weeks of age. Among 1-year-old infants, urinary ∑As and Mo concentrations were higher for infants who consumed rice and rice-based products. Inorganic arsenic is a well-known human carcinogen with increasing evidence that early-life exposure may increase the risk of a wide range of detrimental health effects (i.e., neurological, cardiovascular, respiratory, and metabolic diseases) with impacts throughout the life course (Farzan et al. 2016; IARC 2012; Rodríguez-Barranco et al. 2016; Signes-Pastor et al. 2019). We observed a median increase in infants' urinary ∑As concentrations of 15-fold at 1 year compared to that at 6 weeks of age, and concentrations at 1 year of age correlated with consumption of rice and rice-based products. The 1-year-old infants' urinary ∑As concentrations are in line with earlier studies reporting increased ∑As exposure during weaning in infants 6 to 9 months of age in the US with a median (range) … Rice may contain higher As than other cereals and vegetables (Signes-Pastor et al. 2008; Williams et al. 2007). To reduce inorganic arsenic exposure, a maximum level of 100 µg/kg has been enforced for rice destined to produce foods for infants and young children in Europe (EC 2015). In the US, the 100 µg/kg inorganic arsenic level in infant rice cereals is an action level but not a regulation (FDA 2020b), which could limit manufacturer compliance (Congress 2021a; FDA 2020a). Our study includes data gathered before the FDA action level was finalized in August 2020 (FDA 2020a), thus further studies will need to evaluate more recent exposures.
Fig. 1 Urinary non-essential element concentrations in urine samples collected at 6 weeks and 1 year of age from the same set of infants. N = 187. ◆ Statistically significant paired t test (p-value < 0.05, Table S3). 6W = 6 weeks of age. 1Y = 1 year of age. Notice that the scale of the y-axis varies to facilitate the visualization of the concentrations in each plot. The As concentrations refer to the sum of inorganic arsenic, monomethylarsonic acid, and dimethylarsinic acid.
Besides inorganic arsenic, exposure to Cd, Hg, and Pb is also of public health concern. Cadmium is a human carcinogen, and Pb and Hg are strong neurotoxicants (ATSDR 1999, 2007, 2012a; EFSA 2009b, 2010, 2015). There is no defined safe level of exposure to inorganic arsenic, Cd, Hg, or Pb, yet detectable levels are being reported in baby foods (Brody and Houlihan 2019; Congress 2021a, b). This may explain the increased exposure to these non-essential elements in our infant study population between 6 weeks and 1 year of age. The current FDA plan, Closer to Zero, aims to reduce infants' and young children's exposure to toxic elements from food, but the effectiveness of the plan still needs to be evaluated (FDA 2021). Likewise, the European Commission has recently enforced stricter regulations regarding maximum limits of Cd and Pb in a wide variety of foods to reduce exposure (EC 2021a, b).
Of the other non-essential elements, ingestion of Al, Sb, Sn, and V from the diet is among the primary exposure routes for non-occupationally exposed adults (ATSDR 2005a, 2008, 2012b; EFSA 2005). Consistent with this, we observed increased urinary concentrations in our one-year-old infants compared with 6 weeks of age. Aluminum is also associated with neurotoxicity (Dórea and Marques 2010). The median urinary Al concentration of 113.6 µg/L in our one-year-old infants was slightly higher than the upper bound reference value in urine for adults of 110 µg/L (Caroli et al. 1994; EFSA 2008) and thus warrants further investigation. At 1 year of age, the urinary levels of Sb, Sn, and V were each relatively low (ATSDR 2012b; Poddalgoda et al. 2016), and the levels of the essential elements Co, Fe, and Se reached levels similar to those reported in the general population (ATSDR 2003, 2004; Bresson et al. 2015; Pfrimer et al. 2014). The median urinary Ni concentration of 3.29 µg/L at 1 year of age was higher than the upper bound reference value of 3 µg/L for healthy adults (ATSDR, 2005b). Further studies are necessary to assess the health impact of the overall real-life simultaneous exposures to essential and non-essential elements (ATSDR 2012b; Poddalgoda et al. 2016).
Fig. 2 Urinary essential element concentrations in urine samples collected at 6 weeks and 1 year of age from the same set of infants. N = 187. ◆ Statistically significant paired t test (p-value < 0.05, Table S3). 6W = 6 weeks of age. 1Y = 1 year of age. Notice that the scale of the y-axis varies to facilitate the visualization of the concentrations in each plot.
In our mixture exposure assessment using WQS regression, the highest positive weights were assigned to urinary ∑As and Mo concentrations, suggesting that they are the largest contributors to the exposure mixture of essential and non-essential elements during weaning. Evidence on the joint effect of an exposure mixture of inorganic arsenic and Mo on children's health is still scarce (García-Villarino et al. 2021); however, both have been related to increased oxidative stress (Domingo-Relloso et al. 2019; Tolins et al. 2014).
Urinary Mo concentrations were related to rice and rice product consumption among one-year-old infants. Rice is a source of the essential element Mo (Huang et al. 2019), and urine is the dominant excretion route for Mo (ATSDR 2020). This may explain the increased urinary Mo with rice consumption. Ingested Mo is a cofactor for important enzymes, such as aldehyde oxidase, xanthine dehydrogenase, sulfite oxidase, and the amidoxime reducing component (Huang et al. 2019). The urinary Mo concentrations in our one-year-old infants were similar to the urinary Mo concentrations reported in a prior study of 496 US residents including both urban and rural communities, both males and females, and persons aged 6-88 years from all major ethnicities, with a median (interquartile range: Q1-Q3) of 56.5 (27.9-93.9) µg/L (ATSDR 2020; Paschal et al. 1998). Rice can also accumulate Ni (Cao et al. 2017), which may also explain the higher urinary concentration trend among rice consumers compared to non-rice consumers. Among rice consumers, the median urinary Ni concentration (3.48 µg/L) was higher than the upper bound reference value for healthy adults (ATSDR 2005b).
Fig. 3 Association between urinary non-essential element concentrations and rice consumption at 1 year of age. N = 147. ◆ Statistically significant two-sample t test (p-value < 0.05, Table S4). Notice that the scale of the y-axis varies to facilitate the visualization of the concentrations in each plot. The As concentrations refer to the sum of inorganic arsenic, monomethylarsonic acid, and dimethylarsinic acid.
While our findings are based on a modest sample size from a well-characterized cohort study, we nevertheless observed statistically significant increases in urinary concentrations of several essential and non-essential elements during infants' first year of life. However, the potential contribution of metabolic changes in the kinetics and excretion of essential and non-essential elements during children's first year of life still needs to be explored (Skröder Löveborn et al. 2016). Urinary multi-element analysis using mass spectrometry was performed following established protocols (Pirkle 2012). In addition, we also performed urinary As speciation and calculated the summation of inorganic arsenic, MMA, and DMA (∑As), excluding non-toxic organoarsenical compounds (i.e., arsenobetaine), as a proxy for inorganic arsenic exposure, which allowed us to control for As exposure misclassification from unmetabolized forms (Jones et al. 2016). We used rice and rice-based products data from a food diary completed just before a spot urine sample collection, where element concentrations were determined as an internal exposure biomarker. This approach allowed us to capture rapidly excreted essential and non-essential elements in urine, such as As and Mo (ATSDR 2020; Meharg et al. 2014); however, urinary concentrations may not provide an accurate measurement of recent exposure for elements slowly released in the urine, such as Co and Cd (ATSDR 2004, 2012a). For the latter, urinary concentrations are a biomarker of long-term exposure (Vacchi-Suzzi et al. 2016). It is also important to bear in mind that the dietary information gathered with a food diary based on 3 consecutive days might not represent children's typical food consumption pattern.
Fig. 4 Association between urinary essential element concentrations and rice consumption at 1 year of age. N = 147. ◆ Statistically significant two-sample t test (p-value < 0.05, Table S4). Notice that the scale of the y-axis varies to facilitate the visualization of the concentrations in each plot.
Information regarding biomarker concentrations of essential and non-essential elements among infants over their first year of life is scant, and despite concerns regarding nonessential elements in foods marketed for infants, limited data exist on whether such foods increase infant biomarker concentrations. Yet infancy is a crucial period of development and a time when sensitivity to toxicants may be greatest. Future efforts should aim to reduce toxic dietary exposures while preserving beneficial nutrients in foods consumed by infants and young children. | 2022-06-16T13:57:51.887Z | 2022-06-16T00:00:00.000 | {
"year": 2022,
"sha1": "557e3a6e805609d3e32dd5c1d25bd68749b7d742",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12403-022-00489-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "466b7f541111073df66894d5ba8907026a6fef05",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
252147889 | pes2o/s2orc | v3-fos-license | Analysis of the Impact of Population Structure Change on Housing Price in China
It is very important to discuss the impact of China's demographic structure change on the real estate market in the process of gradually disappearing “demographic dividend” and increasing aging. In this paper, unitary linear regression model was adopted to explore the impact of dependency ratio changes on housing price based on panel data of Provinces in China from 2002 to 2019. Further heterogeneity analysis method was adopted to perform grouping regression from different dependency ratios, regions and time perspectives. The results of this study are as follows: (1) the total dependency ratio has a negative relationship with housing price. (2) There is a positive relationship between elderly population dependency ratio and housing price, and a negative relationship between children dependency ratio and housing price. (3) There is a negative relationship between the total dependency ratio and housing price in eastern and western China, a positive relationship between the total dependency ratio and housing price in northeast China, and no significant relationship between the total dependency ratio and housing price in central China. (4) Before the census year 2010, dependency ratio showed a negative relationship with housing price, while after the census year 2010, dependency ratio showed a positive relationship with housing price.
INTRODUCTION
High housing prices have seriously affected social stability and economic development. Between 2000 and 2019, the average selling price of commercial housing in China rose from 2,112 yuan per square meter to 9,310 yuan per square meter. Analyzing the factors behind China's housing prices is crucial to understanding the current situation of China's real estate market and whether the government can adopt effective regulation policies. At the same time, in recent years demographic changes have also attracted much attention. Population structure refers to the result of dividing the population according to different criteria, and the dependency ratio is one reflection of demographic structure. The dependency ratio, also known as the dependency coefficient, is the ratio of the non-working-age population to the working-age population. The larger the dependency ratio, the more dependents each worker supports and the heavier the dependency burden. Therefore, this paper uses Chinese provincial panel data from 2002 to 2019 to conduct an empirical analysis of the relationship between the dependency ratio and housing prices through unitary (simple) linear regression and heterogeneity analysis, and provides Chinese evidence and strategies for research on population structure and housing prices.
LITERATURE REVIEW
The real estate market has become one of the important forces promoting China's economic and social development. In recent years, however, the excessively rapid rise of commercial housing prices has led to an imbalance between supply and demand and has had negative effects on China's economic development. More importantly, because high housing prices do not match national income, they affect social harmony and people's well-being [1].
Within the research related to this paper, part of the literature discusses the effects of housing prices. Peng Xushu and Zhang Xiao start from the theoretical mechanisms through which price factors influence regional innovation capability and analyze the influence of price factors such as labour, land and real estate, energy resources and financial assets on regional innovation capability [2]. Yu Yinxia and Zhao Zhongxiu [3] empirically studied the relationship between housing prices and import and export trade based on panel data of 30 Chinese provinces from 2007 to 2018, and found a significant "inverted U-shaped" relationship between housing prices and import and export trade overall, with regional differences. Their mechanism test shows that, on the left side of the "inverted U", real estate investment amplifies the promoting effect of housing prices on export trade in eastern China, and consumption amplifies the promoting effect of housing prices on import trade in central China. On the right side of the "inverted U", real estate investment further aggravates the dampening effect of housing prices on export trade in eastern China, and consumption likewise aggravates the dampening effect of housing prices on import trade in central China. Using a nonlinear framework, Li et al. examined how differences in regional reliance on land finance and in economic development strength shape the heterogeneous effect of housing prices on local government debt risk; they found that the risk-reducing effect of housing prices on local government debt gradually weakens, and turns negative, as reliance on land finance deepens, whereas stronger economic development reinforces the role of housing prices in easing local government debt risk [4]. Sun Weizeng et al. empirically investigated the impact of short-term housing price fluctuations on individual education choices using micro data from the 2010 national population census and the 2005 and 2015 national population sampling surveys, and addressed the endogeneity problem by identifying structural breakpoints in housing price changes [5].
Another part of the literature focuses on the factors influencing housing prices. Wang Chongrun and Zhao Chang used data on 30 provinces and cities from 2004 to 2019 in a mediating-effect model and found that although population aging inhibited the rise of housing prices on the whole, inheritance motivation promoted the rise of housing prices, while risk-free asset preference inhibited it. There is obvious regional heterogeneity in the channels through which population aging influences housing prices: the mediating effect of population aging through inheritance motivation in the eastern and western regions is significantly greater than that through risk-free asset preference, while the mediating effects are the opposite in the central region [6]. Jipeng Liu carries out an empirical analysis of China's provincial panel data from 2000 to 2019 and finds that, at the national level, population aging has become a factor affecting fluctuations in China's commercial housing prices. At the regional level, the impact of population aging on commercial housing prices shows significant regional differences. Specifically, population aging in developed regions has a significant negative impact on commercial housing prices, population aging in less-developed regions has no significant correlation with commercial housing prices, and population aging in developing regions has a significant positive impact on commercial housing prices [7]. Some studies also discuss both the influencing factors and the effects of housing prices. Liu Chengqing and Ren Ling established a PVAR model on panel data for 270 prefecture-level cities from 2002 to 2018 and ran grouped regressions by housing price-to-income ratio. They found that urbanization and the widening urban-rural income gap push housing prices up, and that rising housing prices in turn suppress urbanization and widen the urban-rural income gap; these effects are far stronger in cities with high price-to-income ratios than in cities with low price-to-income ratios [8].
Based on the above literature, this paper makes the following four contributions. First, in terms of research methods, it conducts three types of heterogeneity analysis, namely dependency ratio heterogeneity analysis, regional heterogeneity analysis and time heterogeneity analysis. Second, in terms of the theme, few articles evaluate and analyze China's housing prices through the dependency ratio; this paper takes the dependency ratio as a representative of population structure to explore its impact on China's housing prices and the corresponding countermeasures. Third, this paper uses data from the National Bureau of Statistics of China from 2002 to 2019; the data are comprehensive, relatively recent and close to reality, which helps to reflect the overall situation in China over the past two decades and to draw more accurate and timely economic conclusions. Fourth, against the global background of a growing aging population, research on population structure and its impact on housing prices is particularly important, so the conclusions and policy suggestions of this paper have both theoretical significance and practical value.
Explained Variables
The explained variable in this paper is housing price. Based on existing studies, this paper adopts average selling price of commercial housing (pri) as a proxy variable, and performs logarithmic processing (lpri) on this variable in the empirical stage of this paper.
Explanatory Variables
This paper uses the total dependency ratio (peo) as the core explanatory variable to measure China's population structure. At the same time, the elderly dependency ratio (old) and the child dependency ratio (chi) were selected as explanatory variables for the heterogeneity analysis.
Data Sources
In the empirical part of this paper, 31 provinces, municipalities and autonomous regions of China were selected as the objects of analysis, with an observation period from 2002 to 2019. In China, a full census is conducted once every 10 years, at the end of years ending in "0", while a sample survey is carried out at the end of years ending in "5" between censuses. During the sample period of this paper, all dependency ratio data except for the census year (2010) were sample-survey data; the data for the census year (2010) differed greatly from the sample data before and after it and were not comparable. Therefore, the dependency ratio data for 2010 were excluded. The sample data used in this paper came from the National Bureau of Statistics (NBS), and a descriptive statistical analysis of the variables was conducted. The results are shown in Table 1.
Model Setting
The research model of this paper is a unitary (simple) linear regression, specified as follows: lpri = β0 + β1·peo + ε, where lpri is the explained variable, namely the logarithm of the average selling price of commercial housing, peo is the core explanatory variable (the total dependency ratio), β0 and β1 are the parameters to be estimated, and ε is the random error term. Figure 1 depicts the relationship between the average selling price of commercial housing and the dependency ratio, where pri is the average selling price of commercial housing (yuan/m2) and peo is the dependency ratio (%). As can be seen preliminarily from the trend in the chart, the average selling price of commercial housing decreases as the dependency ratio increases. The relationship between the average selling price of commercial housing and the dependency ratio needs to be further confirmed by the regression analysis in this paper.
Figure 1. Correlation between average selling price of commercial housing (yuan/m2) and dependency ratio (%)
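As a purely illustrative sketch of the specification above, the snippet below estimates the same regression with statsmodels on invented province-year observations; the variable names (pri, peo, lpri) follow the paper, but the data values and the choice of software are assumptions, not the paper's actual estimation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical province-year panel observations (values are illustrative only)
df = pd.DataFrame({
    "pri": [3200, 5400, 9800, 4100, 7600, 12500],   # average selling price, yuan/m2
    "peo": [42.1, 38.5, 30.2, 40.7, 33.9, 27.8],    # total dependency ratio, %
})
df["lpri"] = np.log(df["pri"])                       # log of housing price

# Simple linear regression: lpri = b0 + b1 * peo + e
X = sm.add_constant(df["peo"])
model = sm.OLS(df["lpri"], X).fit()
print(model.summary())
```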
Baseline Regression
Model (1) in Table 2 reports the main regression results for the effect of the dependency ratio on the average selling price of commercial housing. Specifically, the estimated coefficient of the core explanatory variable is -0.0389, which is significantly negative at the 1% significance level. The regression results show that the dependency ratio in China has a negative impact on the average selling price of commercial housing. This is because a rising dependency ratio indicates that the proportion of China's non-working-age population is increasing and the capacity to create social wealth is declining rapidly, which leads to a decline in real estate demand and thus lower housing prices. (Note: ***, ** and * indicate that the statistic is significant at the 1%, 5% and 10% significance levels, respectively, with z values in parentheses; the same applies below.) Models (2) and (3) in Table 2 report the regression results of the elderly dependency ratio and the child dependency ratio on the average selling price of commercial housing, respectively. The estimated coefficients of the explanatory variables are 0.1398 and -0.0861, respectively, both statistically significant at the 1% level. The regression results show that the elderly dependency ratio is conducive to an increase in the average selling price of commercial housing, whereas the child dependency ratio is not. This is mainly because a rising child dependency ratio increases the burden of family support, which leads to less capital being invested in housing consumption, thus reducing housing demand and lowering housing prices. Elderly families in China are often willing to use their savings to buy homes for their children, in part boosting demand for real estate and driving up prices.
Regional Heterogeneity Analysis
According to the 2011 classification method of the National Bureau of Statistics, all provinces in China are divided into four regions, eastern, central, western and northeastern, in order to reveal the regional characteristics of the influence of the dependency ratio on the average selling price of commercial housing. (The eastern region consists of 10 provinces: Beijing, Hebei, Tianjin, Shandong, Shanghai, Jiangsu, Zhejiang, Guangdong, Hainan and Fujian. The central region consists of 7 provinces: Inner Mongolia Autonomous Region, Shanxi, Henan, Anhui, Jiangxi, Hubei and Hunan. The western region consists of 11 provinces: Xinjiang Uygur Autonomous Region, Tibet Autonomous Region, Gansu, Qinghai, Sichuan, Yunnan, Guangxi Zhuang Autonomous Region, Ningxia Hui Autonomous Region, Guizhou, Chongqing Municipality and Shaanxi. The northeast region consists of 3 provinces: Heilongjiang, Jilin and Liaoning.) Models (1), (2), (3) and (4) in Table 3 report the regression results for the effect of the dependency ratio on the average selling price of commercial housing in eastern, central, western and northeastern China, respectively. The estimated coefficients of the explanatory variable are 0.0408, 0.0165, 0.0209 and 0.0594, respectively; the coefficients for the eastern and western regions are significant at the 1% level and that for northeastern China at the 5% level, while the coefficient for the central region is not statistically significant. Table 3. Analysis of regional heterogeneity
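The regional heterogeneity analysis amounts to re-estimating the same regression separately within each regional subsample. A sketch of that grouping logic is given below; the region labels, the data values and the use of statsmodels are hypothetical illustrations, not the paper's actual provincial panel or software.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical observations with a region label, for grouped (heterogeneity) regressions
df = pd.DataFrame({
    "region": ["East", "East", "East", "West", "West", "West",
               "Northeast", "Northeast", "Northeast"],
    "pri":    [11000, 9500, 12800, 5200, 4800, 5900, 6100, 6600, 5800],
    "peo":    [29.5, 31.2, 27.8, 41.0, 43.5, 39.2, 36.8, 34.9, 38.1],
})
df["lpri"] = np.log(df["pri"])

# Fit the same simple regression within each regional subsample
for region, sub in df.groupby("region"):
    X = sm.add_constant(sub["peo"])
    fit = sm.OLS(sub["lpri"], X).fit()
    print(region, "coefficient on peo:", round(fit.params["peo"], 4))
```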
Time Heterogeneity Analysis
This paper takes the 2010 census year as the time node and divides the sample period into two sub-periods to conduct the time heterogeneity analysis. Model (1) in Table 4 reports the regression results for the dependency ratio's influence on the average selling price of commercial housing before the census year 2010. The estimated coefficient of the explanatory variable is -0.0524, significantly negative at the 1% significance level. Model (2) in Table 4 reports the regression results for the dependency ratio's influence on the average selling price of commercial housing after the census year 2010. The estimated coefficient of the explanatory variable is 0.0475, significantly positive at the 1% significance level. The regression results in Table 4 show that the dependency ratio had a negative impact on the average selling price of China's commercial housing before the census year 2010, while the dependency ratio has had a positive impact after the census year 2010.
CONCLUSIONS
According to the above empirical results, this paper considers that the total dependency ratio has a negative relationship with the average selling price of commercial housing in China. From the perspective of different dependency ratios, the dependency ratio of the elderly population is positively correlated with the average selling price of China's commercial housing, while the dependency ratio of children is negatively correlated with the average selling price of China's commercial housing. According to the four regions of China, the average selling price of commercial housing in the eastern and western regions decreases with the increase of the total dependency ratio, the average selling price of commercial housing in the northeast increases with the increase of the total dependency ratio, and the dependency ratio has no significant effect on the average selling price of commercial housing in the central region. From the perspective of different time intervals, the dependency ratio before the census in 2010 has a negative relationship with the average selling price of China's commercial housing, while the dependency ratio after the census in 2010 has a positive relationship with the average selling price of China's commercial housing.
POLICY RECOMMENDATIONS
The implications of the above conclusions lie in the following four points. First, in view of the negative impact of the total dependency ratio on housing prices, the government should adopt corresponding policies to avoid the adverse impact that a possible depression of the real estate market could have on the economy. Second, a policy system should be explored for coping with the upward pressure that population aging places on housing prices, actively cultivating consumer industries for the elderly and guiding the elderly to establish new consumption concepts; in view of the negative impact of a rising child dependency ratio on housing prices, we should be alert to the risks that a rapid decline in housing prices would pose to economic development. Third, the systemic impact of rising dependency ratios on housing prices in different regions should be treated differently in light of local conditions. Fourth, the current two-child policy will further increase China's child dependency ratio, so policymakers should actively respond to the downward pressure that a rising child dependency ratio places on housing prices.
"year": 2022,
"sha1": "c5e5700da734e69b04f37bcc3357c2eb0a0e803f",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125971666.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f785b6aacb5741164487ccd8f9dbf1677293bb72",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
270568445 | pes2o/s2orc | v3-fos-license | Boosting PRRSV-Specific Cellular Immunity: The Immunological Profiling of an Fc-Fused Multi-CTL Epitope Vaccine in Mice
Simple Summary This study introduces an innovative strategy for a cytotoxic T-lymphocyte (CTL) epitope-based multi-epitope vaccine to enhance cellular immunity against porcine reproductive and respiratory syndrome virus (PRRSV), aiming to surpass the limitations of existing vaccines. It pioneers the use of conserved CTL epitopes from PRRSV in vaccine development, eliciting a robust cellular immune response in a mouse model that makes it a potential vaccine candidate for the future. Abstract The continuously evolving PRRSV has been plaguing pig farms worldwide for over 30 years, with conventional vaccines suffering from insufficient protection and biosecurity risks. To address these challenges, we identified 10 PRRSV-specific CTL epitopes through enzyme-linked immunospot assay (ELISPOT) and constructed a multi-epitope peptide (PTE) by linking them in tandem. This PTE was then fused with a modified porcine Fc molecule to create the recombinant protein pFc-PTE. Our findings indicate that pFc-PTE effectively stimulates PRRSV-infected specific splenic lymphocytes to secrete high levels of interferon-gamma (IFN-γ) and is predicted to be non-toxic and non-allergenic. Compared to PTE alone, pFc-PTE not only induced a comparable cellular immune response in mice but also extended the duration of the immune response to at least 10 weeks post-immunization. Additionally, pFc-PTE predominantly induced a Th1 immune response, suggesting its potential advantage in enhancing cellular immunity. Consequently, pFc-PTE holds promise as a novel, safe, and potent candidate vaccine for PRRSV and may also provide new perspectives for vaccine design against other viral diseases.
Introduction
Porcine reproductive and respiratory syndrome virus (PRRSV) is one of the most severe pathogens affecting the global pig industry. Since its initial report in the United States in the late 1980s, it has remained prevalent in swine populations in most countries. PRRSV infection leads to reproductive and respiratory disorders, characterized by respiratory symptoms in piglets and reproductive failure, fetal death, and congenital infection in sows [1,2]. PRRSV is an enveloped, single-stranded, non-segmented, positive-sense RNA virus [3]. According to the latest 2022 classification by the International Committee on Taxonomy of Viruses (ICTV), PRRSV is now divided into two distinct species, namely Betaarterivirus suid 1 (formerly PRRSV-1, European genotype) and Betaarterivirus suid 2 (formerly PRRSV-2, North American genotype), both belonging to the Betaarterivirus genus (formerly Porartevirus genus), family Arteriviridae.
The current PRRSV vaccines on the market, which include inactivated and attenuated vaccines, are unable to provide sufficient protection against PRRSV infection. Inactivated vaccines present exogenous antigens, leading to antigen presentation primarily through the MHC class II pathway by antigen-presenting cells (APCs), which results in humoral immunity with a weak cellular immune response [4]. The cellular immune response that these vaccines do elicit is comparatively weak, as it depends on antigen cross-presentation via the MHC class I pathway. Attenuated PRRSV vaccines address these issues but raise a series of biosecurity concerns, such as the risk of vaccine-derived reversion to virulence, viral genome recombination, and virus shedding [5][6][7][8]. Moreover, both inactivated and attenuated PRRSV vaccines share the problem of inducing a large number of non-neutralizing antibodies, which can delay and reduce the production of neutralizing antibodies and may cause antibody-dependent enhancement (ADE) effects [8,9]. Therefore, there is an urgent need to develop more effective and safer PRRSV vaccines.
Epitope-based peptide vaccines, designed to target specific epitopes of pathogens, exhibit high specificity. B-cell epitope peptide vaccines can significantly enhance humoral immune responses but face challenges of insufficient cellular immunity and mutational escape [10]. In contrast, T-cell epitope peptide vaccines, especially cytotoxic T-lymphocyte (CTL) epitopes, elicit strong cellular immune responses that can cooperate with humoral immune responses, leading to the long-term success of vaccines [11]. Specifically, CTL epitopes promote the killing of virus-infected cells by CD8+ T cells through antigen cross-presentation, thereby enhancing the protective effect of the vaccine [12,13]. Furthermore, designing vaccines that target conserved epitopes is beneficial for developing a universal vaccine to overcome viral mutations.
Epitope-based peptide vaccines also possess additional advantages. They exclude toxic components and decoy epitopes of the viral proteins, making them particularly suitable for combating viral infections that produce a large number of non-neutralizing antibodies and significant ADE effects [14]. Moreover, peptide vaccines are easy to design and produce, allowing for rapid adaptation to pathogen variation [15]. Additionally, they have shown great potential in the treatment of tumors and autoimmune diseases [16,17]. However, peptide vaccines have some drawbacks, such as poor targeting, short half-life, and low presentation efficiency [10]. To overcome these challenges, their performance can be enhanced by fusion with Fc molecules. Fusing a multi-epitope construct with Fc molecules not only extends the plasma half-life by binding to Fc receptors for more sustained immunostimulation but also enhances cellular immunity by mediating antibody-dependent cellular cytotoxicity (ADCC) through FcγR or complement-dependent cytotoxicity (CDC) through complement C1q [18].
Our previous study (published in Chinese) constructed a fusion protein by fusing the neutralizing B-cell epitopes of PRRSV in tandem with the Fc molecule of mouse IgG. The fusion protein induced high-level humoral immunity and a number of neutralizing antibodies in mice and prolonged the presence of specific antibodies in the sera. To enhance the cellular immune response against PRRSV, in this study, we used an enzyme-linked immunospot assay (ELISPOT) to screen multiple CTL epitopes of PRRSV, linked them in tandem, and then fused them with a modified porcine Fc molecule. Thereafter, the recombinant fusion protein was expressed in a prokaryotic system to achieve a high yield and high purity. Finally, the ability to induce cellular immune responses was evaluated by measuring cytokine levels. The aim of this study was to develop a novel anti-PRRSV epitope-based peptide vaccine to enhance cellular immune responses and overcome the limitations of existing vaccines. Our vaccine design strategy provides valuable insights for the development of next-generation PRRSV candidate vaccines and also serves as a reference for the design of vaccines against other viruses.
Virus Strains and Cell Lines
The PRRSV strain used in this study was isolated from the lung tissue of a pig with porcine reproductive and respiratory syndrome. The amino acid sequence of GP5 showed a 98.5% similarity to the GP5 sequence in the JXA1-R strain (vaccine strain), belonging to lineage 8.3, with three amino acid mutations at positions 35 (N to H), 59 (N to K), and 164 (R to G). MARC-145 cells were purchased from the BeNa Culture Collection (BNCC359828, Beijing, China) for PRRSV propagation and titration.
Animal Grouping and Immunization
Animal experiments were divided into two batches (Figure 1). The first batch of animals was used to evaluate the immunogenicity of synthetic peptides and recombinant proteins. Ten 4-week-old weaned piglets were purchased from an established PRRS-negative population (Large White/Duroc/Yorkshire hybrid breed). The pigs were confirmed to be double-negative for PRRSV nucleic acid and antibodies before immunization. They were divided into two groups: one group of five piglets was vaccinated with 1 × 10^4.5 TCID50 of commercial attenuated vaccine (JXA1-R strain) via intramuscular injection. At three weeks post-vaccination, each piglet was challenged with 5 × 10^3.5 TCID50 of the isolated JXA1 strain through intranasal (half-dose) and intramuscular (half-dose) routes. The other group was injected with PBS as a negative control. Peripheral venous blood was aseptically collected from the anterior vena cava three weeks post-challenge for the isolation of peripheral blood mononuclear cells (PBMCs). At six weeks post-challenge, the pigs were euthanized, and the spleen was harvested for the isolation of splenic lymphocytes. The second batch of animals was used to evaluate the immunological effects of the recombinant protein. Six-week-old SPF BALB/c female mice were divided into four groups of five: three groups were vaccinated with PTE, pFc, and pFc-PTE, respectively, and the fourth group was injected with PBS as a negative control. The immunization route was subcutaneous injection, with a recombinant protein dose of 20 µg/0.1 mL with Montanide™ Gel 01 ST adjuvant (SEPPIC), followed by a second immunization three weeks later with a halved dose. Blood and serum were collected from the mandibular vein of mice every 2 weeks for 10 weeks.
PBMCs and Splenic Lymphocyte Isolation
Briefly, 10 mL of anticoagulated blood from the anterior vena cava was diluted with 10 mL of D-Hanks buffer containing penicillin and streptomycin and then slowly layered onto 10 mL of lymphocyte separation medium and centrifuged at 800× g for 20 min.After centrifugation, the middle ring of the milky-white PBMC-enriched layer was aspirated, washed twice with an equal volume of D-Hanks buffer, and centrifuged at 250× g for 10 min to obtain PBMCs.The cells were resuspended in an RPMI 1640 medium, and cell viability was assessed using the trypan blue exclusion method followed by cell counting.After euthanizing the animals 7 weeks post-priming, the spleens were collected to assess the level of IFN-γ and IL-4 via ELISPOT.To isolate splenic lymphocytes, the spleens were aseptically removed, and a splenocyte suspension was prepared from the immunized PRRSV spleen in PBS.After the addition of 2 mL of red blood cell lysis buffer and centrifugation at 4 • C at 250× g for 10 min, the splenocytes were harvested.The splenocytes (1 × 10 6 cells/mL) were plated in 24-well plates and cultured in an RPMI 1640 medium (Gibco, La Quinta, CA, USA) supplemented with 10% fetal bovine serum (Gibco) and 100 µg/mL penicillin-streptomycin (Hyclone, Logan, UT, USA).
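The trypan blue exclusion step above reduces to standard hemocytometer arithmetic. The sketch below illustrates that calculation with hypothetical counts and a hypothetical 1:1 dilution; none of the numbers are from this study.

```python
# Minimal sketch of trypan blue exclusion counting (hypothetical numbers).
# Viable (unstained) and dead (blue) cells counted in four large hemocytometer squares.
viable_counts = [52, 48, 55, 50]   # hypothetical counts per large square
dead_counts = [3, 2, 4, 3]
dilution_factor = 2                # assumed 1:1 dilution in trypan blue

mean_viable = sum(viable_counts) / len(viable_counts)
mean_dead = sum(dead_counts) / len(dead_counts)

# Each large square corresponds to 0.1 µL (1e-4 mL), hence the 1e4 factor.
viable_per_ml = mean_viable * dilution_factor * 1e4
viability_pct = 100 * mean_viable / (mean_viable + mean_dead)

print(f"viable cells/mL: {viable_per_ml:.2e}")
print(f"viability: {viability_pct:.1f}%")
```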
ELISPOT and ELISA
The ELISPOT assay was conducted using Porcine IFN-γ and IL-4 ELISPOT kits (R&D Systems, Minneapolis, MN, USA) following the manufacturer's protocols.Briefly, 1 × 10 6 PBMCs were mixed with an equal volume of peptide (concentration 10 µg/mL) and added to a 96-well plate precoated with anti-IFN-γ antibody, with a volume of 100 µL per well.Positive controls included 10 µg/mL of phytohemagglutinin (PHA, Sigma-Aldrich, St. Louis, MO, USA) and PRRSV at a multiplicity of infection (MOI) of 1, while PBS served as a negative control.Cells were incubated at 37 • C with 5% CO 2 for 24 h.After incubation, PBMCs were removed from the wells, and biotinylated detection antibodies, streptavidin-HRP, and TMB were used for subsequent incubations and color development.Once the PVDF membrane was air-dried, the spots were analyzed using an enzyme-linked spot analysis system.In the ELISPOT experiment with splenic lymphocytes, recombinant proteins PTE, pFc, and pFc-PTE were used for stimulation, and IFN-γ and IL-4 were measured, with other steps being essentially identical to those for PBMCs.The concentrations of cytokines in the sera were measured using mouse IL-2, IL-4, IL-5, and IL-10 ELISA kits (Solarbio, Beijing, China) and mouse IFN-γ and IL-12 ELISA kits (Beyotime, Shanghai, China) following the manufacturer's protocols.
Peptide Screening and Synthesis
PRRSV encodes 16 non-structural proteins and 8 structural proteins.Based on the CTL epitopes collected from the literature, which are mainly distributed on 5 structural proteins (GP3, GP4, GP5, M, and N) and 2 non-structural proteins (Nsp2 and Nsp9) of different PRRSV genotypes, and considering the overlapping nature of some reported peptides as a single long peptide (10-15 amino acid residues), a total of 22 CTL epitopes were ultimately identified (Table 1).These epitope peptides were synthesized by GenScript (Nanjing, China) and confirmed to have a purity of over 95% by reverse-phase high-performance liquid chromatography (RP-HPLC) and mass spectrometry.Solubility tests were conducted in ddH 2 O, DPBS, or DMSO, with a storage concentration of 1-2 mg/mL.
Construction, Expression, and Purification of Fc Fusion Proteins
The recombinant plasmids pET30-PTE, pET30-pFc, and pET30-pFc-PTE were transformed into competent Escherichia coli BL21 (DE3) pLysS (Takara, Dalian, China).After identifying positive monoclonal colonies, the resulting expression strains were grown in Luria broth containing 50 µg/mL Kanamycin at 600 nm OD to 0.6-0.8 and then induced with 0.5 mM isopropyl β-D-1-thiogalactopyranoside at 37 • C for 3 h.Bacteria were collected by centrifugation and used for soluble protein analysis and purification.Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) was used to assess the expression levels of recombinant proteins in E. coli, followed by ultrasonic treatment in a lysis buffer (100 mM NaH 2 PO 4 •2H 2 O, 50 mM imidazole, 10 mM Tris-HCl, 300 mM NaCl, and 100 mM KCl, pH 8.5) with 0.5% Triton X-100 and 5 mM β-mercaptoethanol.The supernatant was then purified by Ni-NTA affinity chromatography (GE Healthcare, Buckinghamshire, UK).Finally, protein concentration was determined using the Pierce BCA protein assay kit (Thermo Fisher Scientific, Waltham, MA, USA).
Statistical Analyses
Student's t-test and one-way analysis of variance (ANOVA) with Tukey's multiplecomparison test were applied to analyze the data in GraphPad Prism 6.0.The data are shown as means ± standard deviations (SDs).A p-value less than 0.05 was considered statistically significant (* p < 0.05, ** p < 0.01, and *** p < 0.001).
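The analyses above were run in GraphPad Prism; purely for illustration, the sketch below shows an equivalent one-way ANOVA followed by Tukey's multiple-comparison test in Python. The group names mirror the study design, but all cytokine values are hypothetical.

```python
# Equivalent of the Prism workflow (one-way ANOVA + Tukey HSD) on hypothetical data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "PTE":     np.array([310., 295., 330., 305., 320.]),   # hypothetical IFN-gamma readings
    "pFc":     np.array([120., 135., 110., 125., 130.]),
    "pFc-PTE": np.array([300., 315., 290., 310., 305.]),
    "PBS":     np.array([40., 35., 45., 38., 42.]),
}

# One-way ANOVA across the four immunization groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's multiple-comparison test between all group pairs.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```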
The 10 CTL Epitopes Significantly Stimulated PBMCs to Release IFN-γ
A comparative analysis between the reported twenty-two CTL epitopes and the sequences of the PRRSV-2 epidemic strain JXA1 was performed, as detailed in Table 1.Thirteen corresponding sequences in the JXA1 strain were completely identical to the CTL epitopes, while the remaining nine sequences presented mutations at specific amino acid positions.To enhance the specificity of the peptides in the experiment, all the sequences used in this study were derived from the JXA1 strain.Subsequently, the online T-epitope prediction software NetMHCcons-1.1 was used to assess the binding affinity of the twentytwo CTL epitopes to the seven most abundant SLA alleles in hybrid pigs, as reported by Cao et al. [13] (Table 1).Eight CTL epitopes showed high binding affinity to specific SLA molecules, namely NSP9-TCE3, NSP3-TCE1, NSP3-TCE2, NSP4-TCE1, NSP4-TCE2, GP5-TCE1, M-TCE3, and N-TCE1.Another five epitopes showed weaker binding affinity, namely NSP2-TCE2, NSP9-TCE1, NSP9-TCE2, GP5-TCE3, and M-TCE2.
To evaluate the immunostimulatory activity of the twenty-two CTL epitopes, PBMCs were isolated from pigs three weeks after the PRRSV (JXA1) challenge and then co-cultured with the individually synthesized epitope peptides, followed by an ELISPOT assay to quantify IFN-γ-secreting cells. The results showed that ten CTL peptides could specifically stimulate the PBMCs, resulting in a considerable number of IFN-γ-secreting cells (>100 spots per 1 × 10^6 PBMCs) compared to the remaining epitopes (Figure 2A). This result was generally consistent with the predictions of NetMHCcons-1.1 (Table 1). Among them, NSP9-TCE3 and GP4-TCE2 were the most effective (>200 spots per 1 × 10^6 PBMCs). Notably, NSP9-TCE2 (YASAAAILM) was fully conserved across all the genotypes of PRRSV, which is crucial for developing a universal vaccine given the highly mutable nature of PRRSV.
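The screening criterion above (>100 IFN-γ spots per 1 × 10^6 PBMCs) amounts to a simple filter over the ELISPOT counts. The sketch below illustrates it; the spot counts are hypothetical placeholders, not the measured data.

```python
# Sketch of the epitope screening criterion: keep an epitope when it induces
# more than 100 IFN-gamma spots per 1e6 PBMCs. All counts below are hypothetical.
spot_counts = {
    "NSP9-TCE3": 230, "GP4-TCE2": 215, "GP5-TCE1": 150, "M-TCE3": 120,
    "N-TCE1": 105, "NSP9-TCE2": 140, "NSP2-TCE2": 60, "M-TCE2": 95,
}

THRESHOLD = 100  # spots per 1e6 PBMCs, as stated in the text

positive_epitopes = {name: n for name, n in spot_counts.items() if n > THRESHOLD}
print(f"{len(positive_epitopes)} epitopes above threshold:")
for name, n in sorted(positive_epitopes.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {n} spots per 1e6 PBMCs")
```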
Design and Bioinformatics Analysis of Epitope Peptides Fused with Fc Protein
The design of the multi-epitope peptide was achieved by concatenating the 10 screened epitopes using either the flexible short linker GPGPG or AAY.Following evaluation with NetMHCcons-1.1, the concatenation strategy was designated as Poly-T-Epitopes (PTE) (Figure 2B).Subsequently, to construct the Fc fusion protein, the porcine IgG-derived (Uniprot: K7ZLA7) pFc molecule was retained with its CH2, CH3, and hinge regions.Moreover, two universal T-helper agonists, PADRE and TpD, were incorporated to enhance T-cell targeting efficiency [25].Ultimately, pFc-PTE was generated by inserting PTE into pFc.
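Conceptually, the PTE construct is simply the ten screened epitopes joined by a GPGPG or AAY linker. The sketch below illustrates that concatenation; only the NSP9-TCE2 sequence (YASAAAILM) is quoted from the text, and the other entries are placeholders rather than the actual screened epitopes.

```python
# Sketch of the poly-T-epitope (PTE) concatenation with a flexible linker.
epitopes = [
    "YASAAAILM",   # NSP9-TCE2, sequence given in the text
    "EPITOPEAA",   # placeholder for a screened epitope
    "EPITOPEBB",   # placeholder for a screened epitope
]

def build_pte(epitopes, linker="GPGPG"):
    """Concatenate epitope sequences with the chosen flexible linker."""
    return linker.join(epitopes)

pte_gpgpg = build_pte(epitopes, linker="GPGPG")
pte_aay = build_pte(epitopes, linker="AAY")
print(pte_gpgpg)
print(f"length with GPGPG linkers: {len(pte_gpgpg)} aa")
print(f"length with AAY linkers: {len(pte_aay)} aa")
```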
In addition, a bioinformatics analysis was conducted on the three recombinant proteins (Figure 2C), indicating a sequential decrease in antigenicity from PTE, pFc-PTE to pFc.Both PTE and pFc-PTE exhibited relatively high immunogenicity, while pFc demonstrated minimal immunogenicity, likely due to its nature as an endogenous protein in animals.Allergenicity prediction results showed that pFc-PTE and pFc are non-allergenic, whereas PTE was predicted to be allergenic.Toxicity prediction results indicated that all three proteins were non-toxic; however, PTE scored higher, indicating a greater potential for toxicity.Taken together, compared to the simple concatenation of epitope peptides, the fusion with pFc resulted in a reduction in the antigenicity (by 28%) and immunogenicity (by 51%) but improved the allergenicity and toxicity profiles of PTE (as evidenced by the numerical values).
The Recombinant Proteins Were Successfully Expressed and Purified
The encoding genes for PTE and pFc were codon-optimized and synthesized and then cloned into the pET30a vector.The construction of pET30-pFc-PTE was achieved through the enzymatic cleavage and ligation of PTE onto the pET30-pFc vector.Three recombinant plasmids were expressed in E. coli BL21(DE3) cells.The three proteins were then purified by Ni-NTA affinity chromatography (Figure 2D).Protein concentrations were determined using the BCA method, yielding peak concentrations of 1.28 mg/mL for PTE, 1.15 mg/mL for pFc, and 0.43 mg/mL for pFc-PTE.It is worth noting that the binding efficiency of pFc and pFc-PTE to nickel resin was not optimal, with much of the protein lost in the flow-through, suggesting that purification conditions could be optimized to improve protein yield.
pFc-PTE Stimulated PRRSV-Specific Splenic Lymphocytes to Secrete High Levels of IFN-γ
To determine whether the CTL epitope peptides retain the ability to be recognized by specific T cells after concatenation and fusion with pFc, the following experiment was conducted: The pigs immunized with the JXA1-R vaccine and later challenged with the JXA1 strain were euthanized at week 6.Splenic lymphocytes were harvested for in vitro stimulation with PTE, pFc, and pFc-PTE.Antigen specificity of T cells was measured by IFN-γ and IL-4 ELISPOT.Both PTE and pFc-PTE potently induced T cells to secrete IFN-γ and IL-4, with no significant difference in IFN-γ levels, while a significant difference was observed in IL-4 secretion (Figure 3A,B).These findings suggest that the fusion with pFc attenuates the Th2-inducing immunogenicity of PTE but does not compromise Th1 immune responses.Collectively, despite concatenation and pFc fusion, CTL epitope peptides remained recognizable by PRRSV-activated T cells, thereby retaining its immunogenicity.
pFc-PTE Induces a More Persistent Cellular Immune Response
To explore whether the fusion of pFc prolongs the clearance time of PTE in vivo, mice were immunized with PTE, pFc, and pFc-PTE.Subsequently, blood samples were collected, and the levels of IFN-γ and IL-4 in the sera were determined using ELISA kits.Within two weeks after the first immunization, the levels of IFN-γ and IL-4 in all experimental groups increased slightly, with no significant difference observed, suggesting the successful activation of the mice's immune system by the recombinant proteins (Figure 3C,D).Notably, a pronounced increase in IFN-γ and IL-4 levels was detected at week 4 post-immunization, attributable to the booster immunization administered at week 3.
The PTE group exhibited peak cytokine levels at week 4 after initial immunization, while the pFc and pFc-PTE groups demonstrated peak levels at week 6.Subsequently, a sharp decline in IFN-γ and IL-4 levels was observed in the PTE group at weeks 6 and 8, contrasting with a more gradual decrease in the pFc and pFc-PTE groups, and it was not until at least 10 weeks after immunization that they returned to levels comparable to those in the PTE group.These findings imply that pFc fusion may enhance the persistence of PTE in vivo, providing a sustained stimulus for immune response induction.The prolonged response is likely due to the extended half-life of the pFc component, which facilitates enhanced antigen presentation and immune complex formation through its interaction with Fc receptors, thereby extending the immune response duration [26].
pFc-PTE Induced a Th1-Biased Immune Response
It is noteworthy that, in mice injected with PTE, pFc, and pFc-PTE, all three antigens elicited measurable levels of IFN-γ and IL-4 in the sera at 5 weeks after initial vaccination but with different peak values.The PTE group exhibited the highest peak of IFN-γ, with no significant difference from the pFc-PTE group, while the IFN-γ level in the pFc group was much lower than in the other two groups (Figure 4A).On the other hand, the PTE group also had a markedly higher peak of IL-4 than the pFc and pFc-PTE groups, with a minor and non-significant difference in IL-4 levels between the pFc and pFc-PTE groups (Figure 4B).
IFN-γ and IL-4 are cytokines characteristic of Th1 and Th2 cell types, respectively, with IFN-γ signifying cell-mediated immunity and IL-4 indicating humoral immunity [27].To confirm that the immune response induced by pFc-PTE is predominantly of the Th1 type, we further measured the levels of four additional representative cytokines using ELISA.The changes in IL-2 and IL-12 (Th1-type cytokines) levels exhibited similar patterns to those of IFN-γ (Figure 4C,D).Additionally, the patterns of IL-5 and IL-10 (Th2-type cytokines) levels were consistent with those of IL-4 (Figure 4E,F).These findings indicate that pFc-PTE primarily elicits a Th1-biased immune response.
Discussion
PRRSV, as a major pathogen in the global pig industry, presents substantial challenges to vaccine research and development.The economic losses and concerns for animal welfare emphasize the critical need for novel vaccine development.Current vaccine strategies against PRRSV, including inactivated and attenuated vaccines, encounter several hurdles such as inadequate immunological protection, biosecurity risks, and the risk of ADE effects.These issues underscore the pressing need for innovative vaccine approaches.The present study introduced a pFc-PTE fusion protein designed to enhance and prolong the cellular immune response to PRRSV.This innovation significantly enhances both the theoretical and practical approaches to combating PRRSV in the swine industry.
Epitope vaccines, which offer better safety profiles than traditional live-attenuated vaccines, have been shown in numerous studies to have positive effects [28][29][30].Additionally, epitope vaccines have a superior ability to cope with viral strain mutations compared to conventional vaccines [31].This study addresses the limitations of epitope vaccines, including their suboptimal targeting and short half-life, by designing a fusion with the pFc.The Fc region targets immune cells expressing Fcγ receptors on their surface, thereby endowing the fused epitope peptides with robust immunogenicity and significantly enhancing the immune response of immune cells.Additionally, the production of these epitope peptides is rapid and straightforward, allowing for large-scale expression in Escherichia coli, which makes the production cost highly attractive.
Fc molecules have been applied in studies aimed at combating PRRSV infection.Studies have shown that the fusion of PRRSV receptors, sialoadhesin (Sn) and CD163, with Fc enhanced receptor half-life and cytotoxicity, conferring protection against PRRSV infection [32,33].Additionally, Fc fusion has been demonstrated to enhance the immunogenicity of the PRRSV GP5 protein, eliciting the production of specific and neutralizing antibodies against PRRSV GP5 in mice [34].Furthermore, Fc fusion facilitates the entry of PRRSV Nsp9-specific nanobodies into the monocyte-macrophage lineage cells, which highly express Fcγ receptors, thereby inhibiting PRRSV replication and extending the duration of action [35].Building upon these studies, the present study introduces a novel approach by focusing on CTL epitopes, which are crucial for activating specific T-cell responses.The fusion with the Fc molecule not only enhanced the immunogenicity of PTE but also potentially improved its immunological effectiveness by prolonging the half-life and enhancing antigen presentation.Moreover, by using a series of bioinformatics analysis tools, we conducted a comprehensive assessment of the immunogenicity, antigenicity, and potential toxic side effects of the fusion protein, providing some predicted evidence for the safety of the vaccine.pFc-PTE showed robust immunogenicity and sustained cellular immune responses in mice, and the results preliminarily demonstrated the feasibility of improving PRRSV cellular immune levels through the tandem fusion of CTL epitopes and Fc molecules.However, the translation of these findings to pigs is a critical next step.Our future studies will involve experiments in pigs to validate the immunogenicity and protective efficacy of pFc-PTE.That will provide valuable insights into the vaccine's potential in a more clinically relevant setting.Additionally, establishing the optimal immunization dosage and delivery strategy, as well as evaluating long-term immunological effects and safety, requires further exploration.Furthermore, the cross-protective ability of the pFc-PTE fusion protein against different PRRSV variants has not been assessed, which is particularly important in the context of PRRSV's high variability.There are currently seven representative strains from different genotypes of PRRSV (Lelystad virus, VR-2332, CH-1a, JXA1, NADC30, NADC34, and RFLP 1-4 lineage1C).Due to the high mutation rate of PRRSV, only NSP9-TCE2 (YASAAAILM) is fully conserved among these representative strains.Therefore, in order to improve the cross-protection ability against different PRRSV variants, it may be necessary to develop epitope pools that contain as many mutated epitopes as possible for immunization.
In summary, the pFc-PTE fusion protein shows potential in promoting Th1 cellular immune responses and prolonging the duration of the immune response, providing a new strategy for developing novel, safe, and highly effective PRRSV vaccines.Future research should focus on in vivo experiments, clinical evaluations, evaluating cross-protective capabilities, and assessing immunological effects against different PRRSV variants.Through these studies, a comprehensive understanding of the immunological and protective effects of the pFc-PTE fusion protein can be achieved, offering safer and more effective strategies for PRRSV control and inspiring new approaches to vaccine design for other viral diseases.
Figure 1. Schematic representation of animal grouping and immunization protocol.
Figure 2. Construction, evaluation, and purification of recombinant proteins: (A) Assessment of the immunostimulatory efficiency of the 22 CTL epitopes. After synthesis, these epitopes were used to stimulate PBMCs from experimental pigs (n = 5) that had been immunized with the JXA1-R vaccine and subsequently challenged with the JXA1 wild-type strain. The secretion of IFN-γ was quantified by ELISPOT assay. (B) Schematic diagram of the genetic structures of the recombinant proteins. (C) Prediction of antigenicity, immunogenicity, allergenicity, and toxicity. (D) SDS-PAGE of purified proteins. Sup: supernatant after ultrasonic disruption and centrifugation. FT: flow-through (containing unbound proteins). E1-E4: eluted proteins, 1 mL/tube. The red wireframe indicates the location of the purified proteins.
Table 1. Summary and selection of PRRSV CTL epitopes.
Comprehensive Evaluation of Biological Effects of Pentathiepins on Various Human Cancer Cell Lines and Insights into Their Mode of Action
Pentathiepins are polysulfur-containing compounds that exert antiproliferative and cytotoxic activity in cancer cells, induce oxidative stress and apoptosis, and inhibit glutathione peroxidase (GPx1). This renders them promising candidates for anticancer drug development. However, the biological effects and how they intertwine have not yet been systematically assessed in diverse cancer cell lines. In this study, six novel pentathiepins were synthesized to suit particular requirements such as fluorescent properties or improved water solubility. Structural elucidation by X-ray crystallography was successful for three derivatives. All six underwent extensive biological evaluation in 14 human cancer cell lines. These studies included investigating the inhibition of GPx1 and cell proliferation, cytotoxicity, and the induction of ROS and DNA strand breaks. Furthermore, selected hallmarks of apoptosis and the impact on cell cycle progression were studied. All six pentathiepins exerted high cytotoxic and antiproliferative activity, while five also strongly inhibited GPx1. There is a clear connection between the potential to provoke oxidative stress and damage to DNA in the form of single- and double-strand breaks. Additionally, these studies support apoptosis but not ferroptosis as the mechanism of cell death in some of the cell lines. As the various pentathiepins give rise to different biological responses, modulation of the biological effects depends on the distinct chemical structures fused to the sulfur ring. This may allow for an optimization of the anticancer activity of pentathiepins in the future.
Introduction
Pentathiepins are a class of compounds characterized by a seven-membered ring consisting of five sulfur and two carbon atoms [1]. First synthetic representatives, including a simple benzopentathiepin (Figure 1a), were already described in 1971 but not investigated in detail regarding possible biological activities [2]. This was first conducted in the early 1990s, when the natural pentathiepin varacin (Figure 1b) was isolated from marine Ascidiacea and shown to possess potent cytotoxic activity [3]. Thereafter, intensified research attributed interesting biological effects to this striking class of compounds, such as antifungal, antiviral, antibacterial, and DNA-cleaving activity, as well as cytotoxicity in cancer cell lines [3][4][5][6][7]. Furthermore, pentathiepins were reported to be specific inhibitors of protein kinase C (PKC), the striatal-enriched protein tyrosine phosphatase The GPx1 constitutes one of eight human isoforms of glutathione peroxidases and one of the five that contain a selenocysteine in their catalytic center [13,14]. The most prevalent isoforms are the cytoplasmic GPx1 and the membrane-bound GPx4, which maintain the physiological intracellular redox state and protect the membrane from lipid peroxidation, respectively. The latter is a key player in ferroptosis, an iron-dependent modality of cell death that can be induced by inactivation of GPx4 [15,16]. GPx1 is a key enzyme in cellular redox regulation; its role in cancer initiation, progression, and treatment has been diversely discussed in the literature. In several publications, a high GPx1 expression correlated with a poor outcome prediction [17,18], while others detected decreased GPx1 level in cancerous tissue compared with healthy tissue, thus ascribing the GPx1 a protective effect [19,20]. However, chemotherapeutic resistance mechanisms have been linked to increased expression of GPx1 [21]. Moreover, in a recent publication, GPx1 knockout cells proved to be significantly more sensitive to treatment with the chemotherapeutics cisplatin, lomustine, and temozolomide compared with the parental cell line [22]. Hence, therapeutic inhibition of the GPx1 may be a promising approach for potentiating cancer chemotherapy in drug-resistant cells, one which could be pursued by using pentathiepins.
With this strategy in mind, we have examined novel pentathiepins not only with regards to their potential to inhibit the GPx1 but also in the context of cytotoxicity in cancer cells [12]. The five pyrrolo[1,2-a]quinoxaline and three indole-based pentathiepins studied in our previous paper exceeded the GPx1 inhibition potential of the thus far best characterized GPx1 inhibitor, mercaptosuccinic acid (MSA, Figure 1d) [23]. Among the published series of pentathiepins, one representative was selected and further investigated regarding its biological effects ( Figure 1). One key finding was the increase of intracellular reactive oxygen species (ROS) levels up to fourfold compared with a solvent-treated control. This was in accordance with earlier studies that postulated a reaction of the polysulfur ring system with thiols, thereby forming H 2 O 2 via polysulfide anion intermediates [24]. In addition, the accumulation of ROS may result from the decreased capacity of the GPx1 enzyme; GPx1 actually detoxifies peroxides (e.g., H 2 O 2 ) but was inhibited by the pentathiepins. Overexposure of cancer cells to intracellular ROS, exceeding the physiological equilibrium, can have a number of dire consequences for cells. This includes the damage of DNA via strand breaks and the induction of apoptosis as well as interference with normal mitochondrial function, as it was recently confirmed by Behnisch-Cornwell et al. after treating cells with a GPx1-inhibiting pentathiepin [12].
The currently available data suggest a broad spectrum of biological effects mediated by pentathiepins that potentially contribute to their anti-cancer activity. Not only the inhibition of the GPx1 renders these polysulfur structures an interesting class of compounds; their great capability to prevent the proliferation of cancer cells and the induction of apoptosis as a controlled form of cell death labels them as potentially useful anti-tumor treatments. Still, no attempts have been made as of yet to explore possible structure-activity relationships (SARs), and comparisons of various biological effects over several cell lines are also lacking. Furthermore, the inhibition of GPx1 and the cytotoxicity caused by the pentathiepins have not been directly correlated to date.
The present publication describes the synthesis of six additional pentathiepins ( Figure 2) and their comprehensive biological evaluation. Pentathiepin 1 was synthesized and fused to a pyrrolo-pyrazine scaffold with the objective to produce a fluorescent compound that would facilitate the analysis of intracellular distribution; pyrroloannelated N-heterocycles are known cores of chromophores with fluorescent properties [25]. Five of the new compounds (2-6) possess a nicotinamide backbone, a structure that potentially increases water solubility and is well known for its biological activity [26]. This scaffold is substituted with either a piperidine (2), morpholine (3), N,N-diethylamine (4), p-fluorophenone-piperazine (5), or p-tosyl-piperazine (6). Piperazines in particular are frequently used in the rational design of therapeutic agents [27]. Additionally, the introduction of fluorine is widely applied in pharmaceutical medicinal chemistry. It can increase metabolic stability but also enhance binding affinities to a protein target or serve as tracer for 19 F-NMR spectroscopy studies [28][29][30]. The chemical structures of the novel pentathiepins 1-6 that were synthesized and biologically evaluated in this study. Compound 1 contains an aryl-substituted pyrrolo-pyrazine designed to exert fluorescence for intracellular trackability. For the other structures, a nicotinamide backbone was substituted with either a piperidine (2), morpholine (3), N,N'-diethylamine (4), p-fluorophenone-piperazine (5), or p-tosyl-piperazine (6).
From a biological perspective, we present a more detailed adaptation of the postulated mechanism of action ( Figure 3). First, we assessed the potential of the compounds to inhibit the isolated GPx1 and then screened for cytotoxicity across a panel of 14 human cancer cell lines under normoxic and hypoxic conditions. In selected cell lines, the induction of oxidative stress was analyzed, and subsequently the DNA-damaging potential was studied. Particular emphasis was directed towards the cell death mechanism, with experiments covering the induction of apoptosis or ferroptosis. Moreover, we evaluated how the availability of oxygen and glutathione affects the action of pentathiepins, as well as whether the novel compounds inhibit other antioxidant enzymes such as catalase and glutathione reductase. These investigations shed light on the cellular effects of pentathiepins and in what manner oxidative stress, DNA damage, and apoptosis, as well as cell cycle aberrations, intertwine and contribute to the high cytotoxicity of these compounds. mechanism, with experiments covering the induction of apoptosis or ferroptosis. Moreover, we evaluated how the availability of oxygen and glutathione affects the action of pentathiepins, as well as whether the novel compounds inhibit other antioxidant enzymes such as catalase and glutathione reductase. These investigations shed light on the cellular effects of pentathiepins and in what manner oxidative stress, DNA damage, and apoptosis, as well as cell cycle aberrations, intertwine and contribute to the high cytotoxicity of these compounds. Figure 3. The mechanism of action exerted by pentathiepins in terms of the postulations of previous studies and adapted in order to clarify the objectives for the current comprehensive biological evaluations [12,24]. Magnifying glasses highlight the subjects of the present study. Compound 1 was synthesized from 2,6-dibromo-3-amino-pyrazine (I) via two sequential Sonogashira cross-coupling steps. Initially, the cross-coupling of compound (I) with alkyne synthon (II) in the Pd(II)/Cu(I) catalytic system resulted in the Sonogashira product. The resultant crude reaction mixture was utilized in the ring-closing step in the presence of sodium hydride to form the pyrrolo-pyrazine unit (III). Compound III was isolated in 78% yield after column purification and exhibited fluorescence on the TLC plate under a UV lamp at λ = 356 nm irradiation. It was then coupled with 3,3′-diethoxy- Figure 3. The mechanism of action exerted by pentathiepins in terms of the postulations of previous studies and adapted in order to clarify the objectives for the current comprehensive biological evaluations [12,24]. Magnifying glasses highlight the subjects of the present study. Compound 1 was synthesized from 2,6-dibromo-3-amino-pyrazine (I) via two sequential Sonogashira cross-coupling steps. Initially, the cross-coupling of compound (I) with alkyne synthon (II) in the Pd(II)/Cu(I) catalytic system resulted in the Sonogashira product. The resultant crude reaction mixture was utilized in the ring-closing step in the presence of sodium hydride to form the pyrrolo-pyrazine unit (III). Compound III was isolated in 78% yield after column purification and exhibited fluorescence on the TLC plate under a UV lamp at λ = 356 nm irradiation. 
It was then coupled with 3,3 -diethoxy-propyne under Sonogashira reaction conditions, with subsequent ring-closing by the molybdenum complex, resulting in the targeted fluorescent pentathiepin derivative 1 as a red solid in 18% isolated yield after column chromatography purification (35% EtOAc/hexane) (Scheme 1). The purified sample was characterized by 1 The respective nicotinamide derivatives were synthesized on the basis of a procedure described in the literature [31]. Piperazine was protected either with benzoyl or sulfonyl chloride, corresponding to secondary amine precursors "a'" and "b'", which were used in the following reactions (Scheme 2). Subsequently, 6-bromo nicotinic acid was allowed to react with secondary amines piperidine, morpholine, N,N'-diethylamine, and protected piperazines (a' and b') in the presence of 3-[bis(dimethylamino)methyliumyl]-3Hbenzotriazole-1-oxide hexafluorophosphate (HBTU) and the base diisopropylethylamine (DIPEA) at low temperatures to yield the respective amide derivatives Xa-e. These nicotinamide derivatives were further subjected to sequential reactions of the Pd(II)catalyzed Sonogashira cross-coupling, giving Ya-e. The following Mo IV -mediated ringclosing steps resulted in the desired nicotinamide-fused pentathiepins 2-6 in good yields (Scheme 2). The final structures of the compounds were confirmed by 1 H, 13 C, 19 F-NMR ( 19 F where applicable), APCI-MS, and elemental analysis (Supplementary Materials). The molecular structures of compounds 2, 3, and 4 were further verified by single-crystal Xray diffraction analysis ( Figure 4). Scheme 1. Synthesis of 11-methoxy-2-(4-methoxy < phenyl)-3H- [1][2][3][4][5]pentathiepino [6 ,7 :3,4]pyrrolo[1,2-a]pyrrolo[2,3e]pyrazine: (a) Pd(OAc) 2 5 mol %, PPh 3 7 mol %, 6 mol % CuI, Et 3 N, 3 equiv., CH 3 CN, 80 • C, overnight; (b) NaH, dry THF, 60 • C, 18 h, 78% (two steps); (c) Pd(PPh 3 ) 2 Cl 2 5 mol %, 1.2 equiv., 3,3 -diethoxypropyne, 3 mol % CuI, Et 3 N 3 equiv., DMF, rt, overnight; (d) (Et 4 N) 2 [MoO(S 4 ) 2 ] 0.5 equiv., S 8 1 equiv., DMF, 50 • C, 15 h.
Synthesis of Nicotinamide-Fused Pentathiepins (2-6)
The respective nicotinamide derivatives were synthesized on the basis of a procedure described in the literature [31]. Piperazine was protected either with benzoyl or sulfonyl chloride, corresponding to secondary amine precursors "a'" and "b'", which were used in the following reactions (Scheme 2). Subsequently, 6-bromo nicotinic acid was allowed to react with secondary amines piperidine, morpholine, N,N'-diethylamine, and protected piperazines (a' and b') in the presence of 3-[bis(dimethylamino)methyliumyl]-3Hbenzotriazole-1-oxide hexafluorophosphate (HBTU) and the base diisopropylethylamine (DIPEA) at low temperatures to yield the respective amide derivatives Xa-e. These nicotinamide derivatives were further subjected to sequential reactions of the Pd(II)-catalyzed Sonogashira cross-coupling, giving Ya-e. The following Mo IV -mediated ring-closing steps resulted in the desired nicotinamide-fused pentathiepins 2-6 in good yields (Scheme 2). The final structures of the compounds were confirmed by 1 H, 13 C, 19 F-NMR ( 19 F where applicable), APCI-MS, and elemental analysis (Supplementary Materials). The molecular structures of compounds 2, 3, and 4 were further verified by single-crystal X-ray diffraction analysis ( Figure 4).
Inhibition of Glutathione Peroxidase 1
The inhibition of GPx1 by pentathiepins has already been reported by us previously [12]. To investigate whether the new compounds 1-6 can also inhibit GPx1, we performed an enzymatic assay with tert-butylhydroperoxide (t-BHP) as substrate, which is reduced by bovine erythrocyte GPx1 under consumption of glutathione (GSH). The resulting glutathione disulfide (GSSG) was reduced back to GSH by glutathione reductase (GR), thereby converting NADPH to NADP + , which can be photometrically monitored at λ = 340 nm. The rate of NADPH consumption was used to quantify GPx1 enzyme activity. Dose-response graphs are presented in Figure 5a, and the corresponding IC 50 values in Table 1. Relative IC 50 describing the half-maximal inhibition of the enzyme were derived from the inflection point of the sigmoidal shaped graphs. Absolute IC 50 was calculated via thereby converting NADPH to NADP , which can be photometrically monitored at λ = 340 nm. The rate of NADPH consumption was used to quantify GPx1 enzyme activity. Dose-response graphs are presented in Figure 5a, and the corresponding IC50 values in Table 1. Relative IC50 describing the half-maximal inhibition of the enzyme were derived from the inflection point of the sigmoidal shaped graphs. Absolute IC50 was calculated via interpolation of the standard curve resulting in the concentration at which 50% of the GPx1 was inhibited. Four out of six pentathiepins, namely 1, 3, 4, and 5, had a strong inhibitory effect on the bovine erythrocyte GPx with IC 50 values between 0.56 and 0.92 µM (Table 1). However, residual enzyme activities at the highest tested concentration (12.5 µM) ranging from about 16 to roughly 30% were apparent. Compound 2 had an IC 50 of 2.28 µM and a residual activity similar to that of 3, 4, and 5, while 6 was unable to decrease the activity of the bovine GPx by more than 50% at 12.5 µM. In the GPx1 inhibition assay, five of the six novel pentathiepins had IC 50 values between 0.6 and 2.3 µM. Hence, compounds 1-5 were considerably more potent than the thus far best characterized inhibitor mercaptosuccinic acid (IC 50 of 5.9 µM) and just as effective as our recently published pentathiepins [12]. A morpholino (3) or N,N'-diethylamine (4) moiety fused to a nicotinamide backbone is considerably more potent than a piperidine (2). The introduction of a p-fluorobenzoylpiperazine (5) to the piperazine-nicotinamide structure instead of a p-tosyl residue (6) drastically increased the inhibitory activity, rendering it similarly effective as 3 and 4. Pentathiepin 1, originally designed to serve as a trackable compound, inhibited the GPx1 with an IC 50 of about 0.9 µM but with a much lower residual activity at the highest tested concentration (16% vs. 30% for compounds 2-5 at 12.5 µM), demonstrating that a pyrrolo-pyrazine-based pentathiepin can also serve as potent inhibitor of the enzyme.
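The relative and absolute IC50 values described above can be obtained from a fitted sigmoidal dose-response curve: the relative IC50 is the inflection point of the fit, and the absolute IC50 is the concentration at which the fitted activity crosses 50% of the control. The sketch below shows one way to do this with a four-parameter logistic fit on hypothetical residual-activity data; the study itself does not specify this exact fitting routine.

```python
# Sketch of deriving relative and absolute IC50 values from a GPx1 dose-response
# curve. Concentrations and residual activities below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.05, 0.1, 0.39, 0.78, 1.56, 3.13, 6.25, 12.5])   # µM
activity = np.array([98., 95., 80., 62., 45., 35., 30., 28.])      # % of control

def four_pl(x, top, bottom, ic50, hill):
    """Four-parameter logistic (sigmoidal) model of residual enzyme activity."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

popt, _ = curve_fit(four_pl, conc, activity, p0=[100., 25., 1., 1.])
top, bottom, rel_ic50, hill = popt

# Relative IC50: inflection point of the fitted curve (half-way between plateaus).
print(f"relative IC50 ≈ {rel_ic50:.2f} µM")

# Absolute IC50: concentration at which the fitted activity equals 50% of the
# control, obtained by inverting the model (requires bottom < 50 < top).
abs_ic50 = rel_ic50 * ((top - bottom) / (50.0 - bottom) - 1.0) ** (1.0 / hill)
print(f"absolute IC50 ≈ {abs_ic50:.2f} µM")
```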
To assess the specificity of inhibitors toward the GPx1, we performed enzymatic assays with other relevant antioxidant enzymes. These included the glutathione reductase (GR; yeast-derived), which is an auxiliary component of the GPx1 assay, and catalase (CAT; bovine), a key redox enzyme that fulfils a similar enzymatic task as the GPx1 by detoxifying hydrogen peroxide. None of the pentathiepins altered the activity of either enzyme at a concentration of 25 µM (Figure 5b,c).
Cytotoxic and Antiproliferative Activity
Distinct MTT cell viability and crystal violet cell proliferation assays were performed to assess the cytotoxicity of the compounds 1-6 on 14 human cancer cell lines. In the viability assay, cells with intact mitochondria and thus active dehydrogenases are competent to convert MTT to the corresponding insoluble formazan. The formazan can be photometrically detected after solubilization and allows for indirect conclusions about the viability of the cells as an IC 50 value [32,33]. In the crystal violet assay, cells are stained by the binding of the dye to the chromatin, thus allowing for a quantification of biomass that remains adherent after a treatment in comparison with a control after a specific time [34]. Potency is defined as the growth inhibitory concentration 50% (G I50 ).
With the pentathiepins 1-6 IC 50 (cell viability) and GI 50 (cell growth) values in the low-or even sub-micromolar range were observed ( Figure 6, Table S3). Notably, compound 6 was least cytotoxic with a mean of 2.36 µM throughout the panel, while 1 displayed a mean IC 50 Table S3. n ≥ 3 independent experiments.
In the crystal violet assay, the pentathiepins 1-6 gave sub-micromolar GI50 values over all cell lines, between 0.17 µM (3) and 0.57 µM (6). Cell lines that were most resistant to the compounds were the urinary bladder and the pancreatic cancer cell lines with mean GI50 values between 0.49 and 0.62 µM. Again, the weakest effect was observed for 6. Interestingly, pentathiepin 1, which did not affect the cellular viability of the MCF-7 breast cancer cell line up to 10 µM, strongly inhibited proliferation with a GI50 of 0.21 µM.
In addition to performing the cell viability assay under normoxia (19% atmospheric oxygen), incubations were also conducted under hypoxic conditions, i.e., atmospheric oxygen levels of 1% during the treatment for 48 h. Hypoxia is a favorable condition of the tumor microenvironment that is connected with poor outcome [35]. Therefore, the low oxygen condition mimics better the tumor environment. The biological effect of the pentathiepins under this condition provides insight into their dependency for oxygen, which has been postulated a key component in the proposed mechanism of action ( Figure 3). Consistent with the proposed mechanism of action for five of the compounds, potency decreased under hypoxic conditions; in fact for 6, the half-maximal inhibitory concentration exceeded the highest tested concentration of 10 µM in four cell lines ( Figure 6, Table S3). As the exception, pentathiepin 1 showed no or potentiating effects regarding the cytotoxicity under hypoxia. In contrast to the other compounds, in some cell lines, the cytotoxic effects of 1 were more pronounced under oxygen-reduced incubation conditions (Figure 6), indicating a tangential role of oxygen availability for this pentathiepin. Especially for the breast cancer cell line MCF-7, the cytotoxicity increased substantially, indicated by a decrease of the IC 50 from >10.0 µM to about 1.0 µM. Furthermore, the availability of atmospheric oxygen during treatment did not alter the effect on cell viability in three of the cell lines, namely, the cisplatinresistant ovarian carcinoma cell line A2780cis and the lung carcinoma cell lines A427 and LCLC-103H.
To explore whether these observations were due to altered cell doubling rates under normoxic and hypoxic conditions, we determined the growth rates and subsequently the division times ( Figure S49a). With respect to this parameter, there was no difference between the culturing under normoxic and hypoxic conditions. Hence, the altered sensitivity toward the treatment with pentathiepins was independent of changes in cell doubling rates.
Another question that could be answered was whether the growth rates of the cell lines are determining factors for the cytotoxicity of the pentathiepins. Rapidly dividing cells are generally more susceptible to chemotherapeutic agents. Thus, a Pearson correlation analysis based on the doubling times and IC 50 or GI 50 values, respectively, was performed ( Figure S49b-d) and interpreted following statistics guidelines [36]. With regards to the inhibition of proliferation, a statistically significant moderate correlation with the doubling times was found for pentathiepins 4 (r = 0.65, p = 0.029) and 5 (r = 0.70, p = 0.017). The same analysis revealed a statistically significant correlation of the IC 50 values with the cell division time, namely, for compounds 3 (r = 0.70, p = 0.017), 4 (r = 0.72, p = 0.012), 5 (r = 0.73, p = 0.010), and 6 (r = 0.70, p = 0.016). These findings indicate that rapidly dividing cells are more sensitive toward the treatment with these pentathiepins. The fraction of shared variance between the two variables (R 2 ) ranged from 43 to 54%, meaning that about half of the effect can be related to the division time.
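The correlation analysis above reduces to a Pearson test between paired cell doubling times and IC50 (or GI50) values, with R² reported as the fraction of shared variance. The sketch below illustrates the calculation on hypothetical pairs, not the measured panel data.

```python
# Sketch of the correlation between doubling times and IC50 values.
# The paired values below are hypothetical placeholders for the cell line panel.
import numpy as np
from scipy.stats import pearsonr

doubling_time_h = np.array([18., 22., 25., 28., 30., 33., 36., 40., 44., 48., 52.])
ic50_um         = np.array([0.4, 0.5, 0.6, 0.7, 0.9, 1.0, 1.2, 1.5, 1.6, 1.9, 2.1])

r, p = pearsonr(doubling_time_h, ic50_um)
r_squared = r ** 2   # fraction of variance shared by the two variables

print(f"Pearson r = {r:.2f}, p = {p:.3f}, R^2 = {r_squared:.2f}")
```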
Consistent with previous publications, the high cytotoxicity of the pentathiepins was confirmed across the 14-cell panel of different tissue origins. In summary, pentathiepins 1-6 had IC 50 values in the low-or even sub-micromolar range, independent from their potential to inhibit the GPx1. This indicates that the inhibition of the GPx1 is not the exclusive reason for the high cytotoxicity. However, compounds with a higher IC 50 in the GPx1 assay such as 2 and 6 were slightly less cytotoxic in both MTT and crystal violet assays under normoxic incubation conditions. Generally, the cell lines presented slightly different sensitivities toward the treatment with pentathiepins. Especially those from pancreatic or urinary bladder carcinoma had IC 50 values that were above the overall cell line average, although viability and proliferation were strongly decreased. This underlines the necessity of testing multiple cell lines from different origin to obtain a better impression of the biological effect caused by a compound. We also found a moderate-to-high correlation between the cytotoxicity of the pentathiepins and the doubling time of the cells. This indicates that, in particular, rapidly dividing cells, such as cancer cells, were targeted.
When comparing the cytotoxic potency of the studied pentathiepins with those of well-known and widely applied anticancer agents tested previously under the same assay conditions [37], we found comparable results; in fact, they even exceeded the effects of clinically used anticancer drugs carboplatin, thiothepa, hydroxyurea, and busulfan in the same cell lines.
GPx1 and Catalase Expression in Selected Cancer Cell Lines
As the pentathiepins targeted the GPx1 in vitro, it was important to assess whether toxicity of the compounds was related to cellular protein levels of GPx1. The selection of the cell lines was based on their expression profile of GPx1. Moreover, HAP-1, HAP-1.KO.GPx1, and A2780 were very sensitive towards treatment with pentathiepins, whereas Siso is highly susceptible to several chemotherapeutic agents [37]. In addition, three pancreatic cancer cell lines were also used in these investigations. The expression of GPx1 and that of catalase as another H 2 O 2 -detoxifying enzyme in the seven cell lines was quantified by Western blotting and normalized to total protein load per lane by using the TGX stain-free gel system from Bio-Rad [38][39][40]. Figure 7 shows the semi-quantitative analysis of the Western blots, expressed as the relative protein expression across the seven cell lines. The cell lines with the highest expression of GPx1 were HAP-1 and Siso. As expected, the knockout cell line HAP-1.KO.GPx1 expressed no detectable enzyme, while A2780 had very low levels.
The two pancreatic cancer cell lines DanG and YAPC contained equal amounts of the enzyme, while PATU-8902 had an expression as low as A2780. With regards to catalase, the parental HAP-1 and the corresponding GPx1 knockout cell line had similar enzyme levels, while A2780 and Siso had the lowest of the panel. Among the three pancreatic cancer cell lines, DanG had the highest expression of catalase, and PATU-8902 and YAPC had similarly low amounts. Above-average GPx1 levels were detected for HAP-1 and Siso cells, which did not correlate with their sensitivity toward the pentathiepins. In particular, the fact that both HAP-1 and the GPx1-negative knockout variant responded similarly throughout the biological assays indicated a negligible impact of GPx1 expression on the cytotoxic mechanism of pentathiepins. Previous studies already compared these two cell lines with regards to their metabolism and antioxidant capacity, including several enzymes and intracellular GSH levels [22]. Additionally, we detected that catalase was most abundant in HAP-1, HAP-1.KO.GPx1, and the pancreatic cancer cell line DanG. Hence, catalase expression did not seem to have any impact on the sensitivity toward pentathiepins either.
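The semi-quantitative Western blot analysis described above normalizes each GPx1 (or catalase) band to the total-protein signal of its lane and then expresses the result relative to a reference lane. The sketch below illustrates that arithmetic with hypothetical band and total-protein intensities; the reference lane is an arbitrary choice for the example.

```python
# Sketch of stain-free total-protein normalization for Western blot quantification.
# All intensity values are hypothetical.
band = {"HAP-1": 5.2e6, "HAP-1.KO.GPx1": 0.0, "A2780": 0.6e6, "Siso": 4.8e6}
total_protein = {"HAP-1": 2.1e8, "HAP-1.KO.GPx1": 2.0e8, "A2780": 1.9e8, "Siso": 2.2e8}

# Normalize each band volume to the total protein loaded in its lane.
normalized = {line: band[line] / total_protein[line] for line in band}

# Express everything relative to one reference lane (HAP-1 chosen arbitrarily here).
reference = normalized["HAP-1"]
relative_expression = {line: v / reference for line, v in normalized.items()}

for line, rel in relative_expression.items():
    print(f"{line}: relative GPx1 expression = {rel:.2f}")
```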
Generation of Intracellular ROS
According to the hypothesized mechanism of action (see Figure 3), pentathiepins can create oxidative stress via generation of reactive sulfur precursors. We have previously reported that a representative pentathiepin can increase the intracellular levels of ROS in two cancer cell lines [12]. This effect was thus further evaluated with the new compounds in seven cell lines (Figure 8). The cells were incubated for 15 min with 25.0 µM pentathiepin and subsequently analyzed by using the intracellular ROS sensor 2′,7′-dichlorodihydrofluorescein diacetate (DCFDA) by flow cytometry. Treatment with pentathiepins 2, 3, 4, and 5 resulted in a burst of ROS in all cell lines, in some cases even exceeding the effect of the positive control with H 2 O 2 (2 mM) (Figure 8). Interestingly, 1 did not change the intracellular ROS levels in any of the cell lines and 6 only in three of the panel, including HAP-1, the corresponding GPx1-knockout line and A2780. In HAP-1 and HAP-1.KO.GPx1, the increase of ROS after treatment with pentathiepins was about two- to fourfold compared with the solvent control sample. In the Siso line, the rise in ROS levels ranged between 5- and 10-fold, while in A2780, it was roughly doubled or tripled. In the pancreatic cancer cell lines DanG and PATU-8902, high levels of intracellular ROS were detected after treatment with 2, 3, 4, and 5, while in YAPC, the only increase was monitored after incubation with 5. In the latter cell line, a very small response was observed in general, as there was no significant reaction toward the H 2 O 2 treatment either.
As thiols are supposedly needed for the activation of the pentathiepins [24], the influence of additional glutathione added to the culture medium during treatment on the intracellular ROS levels was assessed. In these experiments, 3 or 30 µM of GSH were added to the medium containing 25 µM of pentathiepin or solvent, respectively (Figure 9).
Figure 9. Relative intracellular levels of reactive oxygen species after an incubation for 15 min with pentathiepins (25 µM) omitting or adding GSH to the treatment medium detected via flow cytometric DCFDA-based assay in HAP-1 and HAP-1.KO.GPx1. Solvent only was used as negative control (set to 1, dashed line) and treatment conditions with either no, 3, or 30 µM of GSH were related to it. Data are displayed as mean and SD, and statistical analysis was performed in Prism 7 by one-way ANOVA and Dunnett's multiple comparisons post hoc test. n ≥ 3 independent experiments, * p < 0.05, ** p < 0.01, *** p < 0.001.
A significant increase of intracellular ROS caused by pentathiepins 2 and 5 occurred after supplementing culture medium with glutathione, roughly doubling the effect of the compound alone. For 4, an ascending trend was observed but not determined as statistically significant. On the contrary, compounds 3 and 6 showed a tendency toward decreased oxidative stress when GSH was added. Although not statistically significant, additional GSH halved the ROS-stimulating effect of the compound alone. No change was observed for pentathiepin 1, neither for treatment with omitted GSH (compare Figure 8) nor in a co-incubation.
To investigate the long-term influence of treatment with pentathiepins on the ROS levels, we incubated the cells with the compounds 2 and 3 for 24 and 48 h (Figure 10a,b). Pentathiepin 2 was selected due to its probable potentiation by GSH and 3 because of its high potential to quickly induce ROS (see Figures 8 and 9). In addition, this incubation was performed under both normoxic (19%) and hypoxic (1%) conditions to assess the impact of available oxygen (Figure 10a,b).
Under normoxic conditions, this long-term treatment resulted in no significant differences of cells incubated with pentathiepins relative to the solvent control cells. In A2780, trends were observed for both 2 and 3 that ROS levels increased with incubation time. In contrast, under hypoxic conditions, significant differences were detected in the Siso cell line, wherein the amount of ROS decreased by half for both representative pentathiepins on either time point.
The finding that pentathiepins increased intracellular ROS levels confirmed the results of previous studies, where it was postulated that reactive sulfur intermediates are released in the presence of a thiol [12,24]. The fluorescent pyrrolo-pyrazine-containing compound 1 did not result in a boost of ROS in any of the tested cell lines, while all others did. The p-fluorobenzoylpiperazine (5) had the strongest effect, whereas the attachment of a p-tosyl in 6 led to a reduction. This might in part explain the lower cytotoxicity of 6, but it contradicts the effects mediated by 1. All three nicotinamide derivatives (2, 3, 4) gave similar results, irrespective of their terminal functional group. Taken together, this suggests that not only is the pentathiepin ring responsible for the induction of oxidative stress in the cells, but that the nature of the N-heterocycle also plays a critical role.
The induction of intracellular ROS is a common response to anticancer treatment, eventually resulting in cellular death due to oxidative stress. A predominant target for ROS-mediated damage is cellular DNA.
Induction of DNA Strand Breaks
We pursued two different strategies to assess the potential of the pentathiepins to damage DNA: a simple plasmid cleavage assay and the cellular Comet assay. The former is a straightforward method to investigate whether a test compound can cleave DNA in vitro, and if so, whether single and/or double strand breaks are created, while the latter is an ex vivo assay that measures DNA fragmentation in single cells. In the plasmid cleavage assay, a single strand break would relax the supercoiled plasmid (scDNA) and lead to the detection of open circular DNA (ocDNA), whereas a double-strand break would linearize the plasmid (linDNA). An agarose gel electrophoresis enables the discrimination of the distinct plasmid conformations due to their specific electrophoretic mobilities (Figure 11a).
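For orientation, the conversion of gel band intensities into the percentages reported below can be sketched as follows; the band intensities are hypothetical, and any staining-efficiency correction is omitted.

# Minimal sketch of quantifying plasmid cleavage from gel densitometry:
# intensities of the supercoiled (sc), open circular (oc), and linear (lin)
# bands of one lane are converted into percentages. Values are hypothetical.
def plasmid_fractions(sc: float, oc: float, lin: float) -> dict:
    total = sc + oc + lin
    return {"scDNA %": 100 * sc / total,
            "ocDNA %": 100 * oc / total,
            "linDNA %": 100 * lin / total}

# Hypothetical lane: mostly relaxed plasmid (single-strand breaks) plus a small
# linear fraction corresponding to double-strand breaks.
print(plasmid_fractions(sc=2100, oc=5600, lin=300))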
It was found that the pentathiepins 1-6 caused plasmid DNA cleavage to statistically significantly different extents (Figure 11a,c). Compounds 3, 4, and 5 induced similar levels of single-strand breaks, with roughly 55% at 5 µM and about 75% at 25 µM, while 2, 6, and 1 showed decreasing potential to damage DNA. The magnitude of cleavage is not only dependent on the presence of a thiol, in our case GSH, but also on the concentration of the pentathiepin (Figure 11a). A concentration of 25 µM resulted in a significantly higher percentage of open circular plasmid DNA than 5 µM, with a difference of about 15-30%. This is the first time that the formation of double-strand breaks by pentathiepins has been described (Figure 11b). This effect was detected for pentathiepins 2, 3, 4, and 5 after incubation of the plasmid DNA with 25 µM compound, which resulted in about 4% linearized plasmid, corresponding to DNA with double-strand breaks.
Further experiments were conducted with selected pentathiepins to establish some of the conditions that may promote the cleavage effect. The investigations covered different GSH concentrations (Figure 11e) and pH of the buffer system (Figure 11f), which have both been described as modifiers of pentathiepin-mediated damage of plasmid DNA [6,41].
First, we investigated whether there is an optimal pentathiepin-thiol ratio by incubating plasmid DNA with 5 µM of the most active compound 3 and GSH concentrations between 1.0 µM and 10.0 mM (Figure 11e). The highest amount of ocDNA was detected in a GSH concentration range between 250 µM and 4.0 mM, corresponding to 50 to 800 equivalents of GSH per pentathiepin. The minimum amount of GSH that resulted in statistically significant plasmid cleavage was 25.0 µM, equal to a 1:5 ratio of pentathiepin to GSH.
In addition, the pH of the reaction buffer was modified to investigate the influence of this factor on the DNA cleavage potential of two different pentathiepins (Figure 11f). Pentathiepin 1 was selected as the least and 3 as the most active plasmid-damaging compound, and the assay was performed at pH 5.1, 6.1, 7.1, and 8.1 with 5 µM of each compound. For 1, a trend favoring lower pH for higher cleavage effects was observed but was not proven to be statistically significant. However, pentathiepin 3 led to different relative amounts of ocDNA, with the highest observed at pH 7.1 (55%), followed by 6.1 (49%) and 5.1 (45%). A remarkable decrease in cleavage was measured at pH 8.1 (28%), with about half the amount of ocDNA detected compared with pH 7.1.
To further investigate the plasmid cleaving mechanism, we repeated this experiment with pentathiepin 3, with CAT or SOD added (Figure 11d). Both antioxidative enzymes play a role in detoxifying DNA-damaging species, namely H 2 O 2 and superoxide, respectively [41]. As shown in Figure 11d, pentathiepin 3 significantly increased the fraction of damaged DNA to about 43% in comparison with the solvent control (7%). Supplementation with the antioxidative enzymes CAT or SOD reduced the amount of broken plasmid to 19% and 29%, respectively, while a combination of both enzymes did not exceed the effect of CAT alone (19%). Thus, both enzymes exerted a protective role but could not fully compensate for the damaging effect of the pentathiepin at the catalytic concentrations used.
The current finding that pentathiepins 1-6 cause cleavage of plasmid DNA by inducing both single- and double-strand breaks is consistent with our previous findings for another pentathiepin [12] and supports the theoretically postulated mechanism of action [24]. The reaction is dependent on a certain ratio of pentathiepin and GSH, with a minimum ratio of 1:5. We can also associate the potential to induce oxidative stress with the ability to damage DNA, as the ROS-boosting compounds 2, 3, 4, and 5 were also the best DNA cleaving agents.
While previous publications mentioned an acidic environment as beneficial for cleavage effectiveness of varacin or the trithiol compound varacin C [41,42], we could not confirm this for pentathiepins 1-6.
Damage of Genomic DNA
Next, the potential of the pentathiepins 1-6 to damage nuclear DNA was investigated in the cell lines HAP-1, HAP-1.KO.GPx1, and Siso, with different expression levels of GPx1 and CAT (see Figure 7), by Comet Assay (Figure 12) with representative images in the Supplementary Materials ( Figure S48).
All tested compounds damaged the DNA in each of the investigated cell lines at 0 °C but to different extents, while more than 95% of DNA remained intact in the negative controls. In HAP-1 and the corresponding GPx1-knockout cell line (Figure 12a,b), the pentathiepins had similar effects, with 6 being the least DNA-damaging agent. Interestingly, the effect of 1 was comparable to the other pentathiepins, causing damage levels of 70-80%. After treatment with 2, 3, 4, or 5, only 5 to 25% of genomic DNA remained intact. In Siso cells (Figure 12c), the most active agents were 3, 4, and 5, followed by 2, decreasing the amount of intact DNA to 12-25% and 30%, respectively. Pentathiepins 1 and 6 resulted in similar damage, leaving about 70-80% of DNA unbroken. This was a significant difference when compared with the more active pentathiepins 2, 3, 4, and 5.
As expected, the positive control with 20 µM H 2 O 2 resulted in higher comet formation in GPx1-knockout cells, with 5% intact nuclear DNA compared to the parental cell line HAP-1 with 63% undamaged nuclear DNA (Figure 12d). This is consistent with the idea that GPx1 protects cells from DNA damage caused by H 2 O 2 . Surprisingly, for Siso, the cell line with the highest GPx1 expression, nuclear DNA damage was just as high as in the knockout cell line, with about 10% intact DNA remaining in the nucleus.
The findings of the Comet assay are for the most part consistent with the results obtained in the plasmid cleavage experiments. Pentathiepin 6 was the least damaging agent, and the level of DNA damage was similar for compounds 2, 3, 4, and 5 in both assays. An exception was found for 1, which was the least damaging compound in the plasmid assay but resulted in relatively high damage of genomic DNA in HAP-1 and HAP-1.KO.GPx1. Together with the fact that 1 did not result in oxidative stress in any cell line, this indicates a different mechanism of action.
With regards to the role of GPx1, it can be stated that the presence of this enzyme plays only a minor role, if any, in the cytotoxic activity of pentathiepins, considering the fact that both HAP-1 cell lines react similarly to the treatment with these compounds. The observed strong response of the Siso cell line in the positive control setting can be explained by the comparably low expression of CAT, which is more effective in decomposing high concentrations of H 2 O 2 than GPx1.
Intracellular Distribution of Pentathiepin 1
One of the investigated pentathiepins was specifically designed to allow intracellular tracking via fluorescence microscopy, namely, the aryl-conjugated pyrrolo-pyrazine-fused compound 1. As we detected a high potential to damage DNA, we wanted to assess whether pentathiepins accumulate in the nucleus. Exemplarily, Siso cells were incubated with pentathiepin 1 and subsequently monitored under the fluorescent microscope (Figure 13). The DAPI (4′,6-diamidino-2-phenylindole) channel (λ ex = 325-407 nm, λ em = 461 nm) was used to visualize the pentathiepin, while in the FITC (fluorescein isothiocyanate) channel (λ ex = 488 nm, λ em = 525 nm), the autofluorescence of the cells was detected.
The pentathiepin 1 exhibited fluorescent properties when excited; hence, it was feasible to examine the distribution in cells. As apparent in Figure 13, the compound was dispersed within the cytoplasm and did not accumulate in any cellular compartment. Importantly, no accumulation in the nucleus took place, suggesting that nuclear DNA may not be the target for this pentathiepin. This is in accordance with the inferior nuclear DNA cleavage capacity of this compound in contrast to the pentathiepins 2-5 in the Siso cell line (Figure 12c).
Induction of Apoptosis
As reported in our earlier publication, a representative pentathiepin was capable of inducing apoptosis in cancer cell lines [12]. Here, we tested compounds 1-6 for their ability to trigger this particular modality of cell death in a subset of four cell lines: HAP-1, HAP-1.KO.GPx1, A2780, and Siso. A flow cytometric assay was performed, allowing us to discriminate between viable, early, and late apoptotic cells (representative dot plots can be found in Figure S47). FITC-labeled annexin V was used to assess the externalization of phosphatidyl serine as a hallmark of early apoptosis induction [43]. The addition of propidium iodide (PI) enabled the detection of late apoptotic cells as it only permeates porous cell membranes and exerts fluorescence once bound to double-stranded DNA. The cells were treated with the pentathiepins (IC 90 ) for 24 h and were subsequently analyzed (Figure 14).
All new pentathiepins induced apoptosis, although to different extents and in a cell line-dependent manner. In HAP-1 cells, all pentathiepins except for 1 resulted in increased fractions of early apoptotic cells up to 23%, which was comparable to the doxorubicin-treated positive control. Compound 3 additionally increased the percentage of late apoptotic cells up to 12%. In the corresponding GPx1-knockout cell line, the effect of doxorubicin was higher (about 50%), whereas only two of the pentathiepins induced apoptosis, namely, 3 and 5. The fractions of early apoptotic cells were 34 and 15%, respectively, and the population of late apoptotic cells was 6% for 3. In the ovarian carcinoma cell line A2780, the overall effects were marginal, but all compounds except 6 increased the percentage of early apoptotic cells to 6-14%, with 3 resulting in the highest amount. In the Siso cell line, pentathiepin 3 increased both the number of early and late apoptotic cells to about 20%, which exceeded the effects of doxorubicin (0.5 µM). These findings were further corroborated by the observation of morphological features such as membrane blebbing and cell shrinkage as particular hallmarks of apoptosis (Figure S51).
For representative pentathiepin 3 as the most effective with regards to the induction of apoptosis, the effect was examined in a time-dependent manner at 6, 24, and 48 h ( Figure 15).
In Siso cells, apoptosis was visible already after 6 h, while in HAP-1, HAP-1.KO.GPx1, and A2780, first effects were apparent after 24 h. Moreover, there was no increase of apoptotic cells detected between the incubation periods of 24 to 48 h. Hence, for further investigations, we focused on the time frame of 24 h.
The underlying mechanism of cell death for the pentathiepins was analyzed in order to explain their high cytotoxicity that was determined in the MTT and crystal violet assay. The induction of apoptosis was already described in our previous publication [12] and is confirmed herein for the new set of pentathiepins.
To further corroborate this mode of cell death, we performed a luminescent caspase assay by measuring caspase-3 (cas-3) and caspase-7 (cas-7) activities (Figure 16). Both are executioner caspases that are present as zymogens (35 kDa) and become active after being cleaved (fragment sizes 17 and 19 kDa) by initiator caspases. Activated cas-3 and -7 in turn have several other substrates, including PARP1. In this assay, a precursor with a caspase-specific target sequence was converted into its luminescent form by active caspase-3 or -7, thereby emitting light that can be quantified.
Activation of caspase-3 and -7 was confirmed for the positive control with doxorubicin (0.5 µM) in all four cell lines (Figure 16). In HAP-1, the treatment with pentathiepins resulted in elevated caspase activity, but statistically significant results were only obtained for 2, 4, and 6. Similarly, in the corresponding knockout cell line, all compounds increased the activity of caspases in relation to the solvent control. However, these findings were only statistically significant for the pentathiepins 1, 2, 3, and 6. In the cell lines A2780 and Siso, none of the compounds provoked the activation of caspase-3 or -7. Unexpectedly, for the Siso line, a decreasing trend was detected instead.
In addition to general flow cytometric and microscopic analyses, Western blotting was performed to detect the apoptosis-relevant protein PARP1 (poly(ADP-ribose) polymerase 1) and its cleavage fragment (Figure 17 and Figures S29-S34). Full-length PARP1 (116 kDa) is involved in DNA repair, and its cleavage by caspase-3 and -7 (fragment sizes 24 and 89 kDa) is a hallmark of late apoptosis.
Here, the cells were treated for 24 h with the respective pentathiepins (IC 90 ), and protein lysates were prepared and subsequently subjected to SDS-PAGE and Western blotting to quantify the cleavage of PARP1. All detected specific bands were normalized to the total protein load per lane by the TGX stain-free gel system from Bio-Rad [38][39][40].
Cleaved PARP1 was detected to statistically significant extents in both HAP-1 and HAP-1.KO.GPx1 after the treatment with the positive control doxorubicin. Incubation with 3 or 5 resulted in trends for increased levels. However, only the treatment with 6 caused relevant and significant quantities of cleaved PARP1. In A2780 and Siso cells, the fraction of 89 kDa PARP1 was very small and only confirmed after treatment with pentathiepins 3 or 5. These results appear consistent with the previous caspase-3/-7 studies.
No Evidence for Ferroptosis
Not only apoptosis but also ferroptosis may contribute to the mainly ROS-based cytotoxicity of the pentathiepins; ferroptosis derives from iron-dependent oxidative damage of the cell membrane [15]. This form of programmed cell death is initiated via lipid peroxidation, which is normally held in check by the membrane-bound GPx4, another isoform of the glutathione peroxidase family. As pentathiepins are potent GPx1 inhibitors, we indirectly investigated a possible role of GPx4 inhibition by testing whether ferroptosis contributes to cell death. Here, cells were incubated with a serial dilution of the respective pentathiepin, and either ferrostatin-1 (Fer-1) or vehicle control (DMSO) was added; Fer-1 is an established inhibitor of ferroptosis [15,44]. The cell viability was measured after 48 h, and the activity of the compounds after adding or omitting Fer-1 was compared (Figure S50).
If the compounds induced ferroptosis, the addition of Fer-1 would decrease the cytotoxicity compared to the control population, i.e., the resulting IC 50 would be higher than that of pentathiepin-treated cells without the additive [45]. However, no effect on the IC 50 of the pentathiepins was observed when a co-incubation with Fer-1 was performed in the four cell lines, indicating that the test compounds do not induce ferroptosis.
Influence on Cell Cycle Progression
The potent cytotoxicity and the potential to inhibit the proliferation of cancer cells accompanied by DNA damage effects raised the question as to whether cell cycle progression might be affected as a result of treatment with pentathiepins. To answer this, the four cell lines HAP-1, HAP-1.KO.GPx1, A2780, and Siso (Figure 18), as well as the three pancreatic cancer cell lines DanG, PATU-8902, and YAPC ( Figure 19), were treated with the test compounds at the IC 90 concentrations and incubated for 24 and 48 h, followed by quantifying the relative numbers of cells in the different phases of the cell cycle (i.e., G 0 /G 1 , S and G 2 /M) as well as cell debris (i.e., sub G 1 ) by flow cytometry.
For both HAP-1 and HAP-1.KO.GPx1 (Figure 18a,b), the most pronounced effects were measured after 24 h as a result of treatment with 2, 3, 4, and 5; mainly with increased amounts of cells in the sub G 1 -phase and G 2 /M-phase by up to 10 and 30%, respectively, while fewer cells were in the G 0 /G 1 -phase. No such effects were detected after treatment with either 1 or 6. After 48 h, changes in the cell cycle phase distribution of treated cells compared with negative control cells were observed for 2 and 3, displaying the same trend as after 24 h, but to a lesser extent regarding the G 2 /M-arrest.
In A2780 and Siso cells (Figure 18c,d), differences from a negative control were detected after treatment with all pentathiepins except for 1. In most cases, changes appeared after 24 h, including decreased fractions of G 0 /G 1 -phase cells by up to 22% and enhanced amounts of cells in the G 2 /M-phase up to 27%. An exception was after incubation with 6, where after 48 h, a very high relative amount of sub G 1 -phase cells (30%) was measured in A2780 at the expense of the G 0 /G 1 -phase. In the Siso line, the effects were similar after 48 h when compared to the treatment for 24 h.
We also investigated the influence of the pentathiepins on the cell cycle of the three pancreatic cancer cell lines DanG, PATU-8902, and YAPC, which turned out to be less sensitive than the other cell lines investigated (Figure 19). Still, with values in the low micromolar range (Table S3), pentathiepins potently inhibited growth of these cancer cell lines. For DanG cells (Figure 19a), slight changes were observed for 2, 3, 4, and 5, which included an increase of cells in the G 2 /M- and sub G 1 -phases by 6 and 8%, respectively, with a simultaneous decrease of G 0 /G 1 -phase cells by roughly 15%. Pentathiepins 1 and 6 caused no differences from a normal cell cycle. In PATU-8902 cells (Figure 19b), a significant increase of S-phase cells up to 14% was additionally observed, especially when treated with compounds 3 and 4. Interestingly, in YAPC cells (Figure 19c) all six pentathiepins changed the distribution of the cell cycle phases. For 2, 3, 4, and 5, similar results were obtained compared with DanG and PATU-8902; however, remarkably, a very different cell cycle emerged after treatment with 1 and 6. Here, a significant increase of G 0 /G 1 -phase cells by up to 18% occurred with a concurrent reduction of cells in the S-phase by roughly 7-13%, indicating a G 0 /G 1 -arrest. In short, incubations with pentathiepin resulted in cell cycle aberrations with either a G 2 /M-arrest mediated by 2, 3, 4, and 5 or a G 0 /G 1 -arrest as a result of 1 and 6, with the latter only occurring in the YAPC cell line.
The progression of the cell cycle is connected with the integrity of genomic DNA, which is monitored at two different checkpoints. Damage to DNA leads to arrests at the respective phase of the cell cycle, which either result in repair via the intracellular machinery or in cell death. The latter is a mechanism to prevent the propagation of cells with a defective genetic constitution. Therefore, the cell cycle aberrations caused by the pentathiepins may be due to their DNA-damaging potential. The damage of DNA is most probably induced by oxidative stress, with the exception of 1, which was the only pentathiepin that did not lead to a boost in ROS. For the anticancer agent bleomycin, which also causes single-and double-strand breaks in DNA through oxidative processes, similar G 2 /M arrests have been reported in cancer cell lines [46].
To elucidate the cell cycle-aberrating effect of 1, we considered its chemical structure more closely. Here, we recognized a structural similarity of the pyrrolo-pyrazine scaffold with that of the so-called aloisines (6-phenyl[5H]pyrrolo[2,3-b]pyrazines). These compounds have an impact on the cell cycle, specifically G 1 and G 2 arrests, by inhibiting cyclin-dependent kinases (CDKs) [47]. This suggests that the scaffold structure rather than the pentathiepin ring system of 1 may induce the dominating effects on the cell cycle.
Structure-Activity Relationships
One aim of our study was to derive structure-activity relationships (SAR) from the biological evaluation, which may enable structural optimization of pentathiepins in the future. The six pentathiepins studied possess two distinct substructures: either a pyrrolopyrazine (1) or a nicotinamide scaffold (2-6) (Figure 2). Table 2 summarizes the SAR for the six new pentathiepins for 12 biological effects and compares them with those previously observed with compound (c). Apart from its fluorescent properties and an average potential to inhibit the GPx1, the backbone of pentathiepin 1 was accompanied by inferior biological activity. This especially entailed the induction of ROS and apoptosis (Figures 8 and 14), and to a lesser extent the DNA cleaving capability (Figures 11 and 12). However, the terminal methoxy group might act as hydrogen bond acceptor, which may explain the observed effects of this compound such as the inhibition of the GPx1 and the cytotoxic effects.
Taking into consideration the three nicotinamide-based pentathiepins 2, 3, and 4, we found that the terminal morpholine (3) and diethylamine (4) moieties caused superior biological effects compared to the piperidine (2). These observations comprise the inhibition of the GPx1, the cytotoxic and antiproliferative activity, the induction of ROS, apoptosis, and DNA strand breaks (Figures 5a, 6, 8, 11, 12 and 14). The diethylamine groups of 4 are likely to be more flexible than the closed piperidine of 2. The derivative 3, bearing another ring structure (morpholine), exhibited enhanced activity; the increased effects of this compound could be related to the morpholine as the oxygen offers free electron pairs for the interaction with putative targets by hydrogen bonding.
Pentathiepin 5, with the p-fluorophenone scaffold, showed biological activity comparable with 3, especially in the GPx1 assay and the induction of ROS or apoptosis (Figures 5a, 8 and 14). Again, the hydrogen bond accepting ability of the fluoro group may be responsible for this increase in activity.
The compound with the least biological activity was pentathiepin 6, containing a p-tosylpiperazine. This is surprising because the sulfonamide group is bioisosteric to the amide group, which is present in compound 5. Noticeably, this compound possesses an electron-rich aromatic system due to the electron-donating feature (+I effect) of the methyl substituent, but no ability to enter into hydrogen bonding. Moreover, the amide bond has a trigonal configuration, while the sulfonamide bond is tetrahedral. One or more of these structural properties could have resulted in the poor inhibitory potential towards the GPx1 and the lower induction of ROS and DNA damage (Figures 5a, 8, 11 and 12).
Compound (c) in Figure 1 from the first generation of pentathiepins of our previous study [12] showed biological activities similar to 6, especially regarding the cytotoxicity and the induction of ROS. Both compounds have terminal alkyl substituents causing a +I-effect. However, (c) inhibited the GPx1 in contrast to 6. This activity of (c) could be related to the additional cyclic nitrogen in the pyrrolo-quinoxaline backbone, which is not present in 6. This is supported by the fact that 1 is a pyrrolo-pyrazine derivative and also shows inhibitory activity towards GPx1. The investigated pentathiepins, in particular 3, 4, and 5, had a stronger biological activity than (c) with respect to all assays that were performed.
For future optimization, the piperazine scaffold is preferred over the pyrrolo-pyrazine backbone. Moreover, substituents with electronegative properties as well as providing a free electron pair, e.g., a fluorine or morpholine, may impart superior biological activity of the pentathiepins.
General Procedure for Sonogashira Cross-Coupling Reaction
A 25 mL oven-dried Schlenk tube was charged with 2 mol % of palladium(II) acetate (Pd(OAc) 2 ), 5 mol % of phosphine ligand (PPh 3 ) or PdCl 2 (PPh 3 ) 2 (2-3 mol %), 3 mol % copper(I) iodide (CuI), and 1-2 mmol of the chloro- or bromo-heterocyclic derivative under nitrogen (N 2 ) atmosphere, and the resultant mixture was dissolved in 5 mL of dry ACN or DMF. The reaction mixture was stirred for 5 min and supplied with 2 equiv. of 3,3-diethoxypropyne and 5 equiv. of TEA or DIPEA. This was followed by stirring at 60 °C for 3-4 h. After consumption of the starting materials (monitored by TLC/TLC-MS), the solvent was removed under vacuum, and the residue was purified by column chromatography in a hexane/EtOAc (10% to 30%) solvent system to afford the desired product.
The general procedure for Sonogashira coupling was applied using C (0.50 g, 1.64 mmol), yielding D as a red oil in 62% (0.357 g, 1.017 mmol) yield.
The general procedure for the Sonogashira reaction was applied using "X a" (1.94 g, 7.16 mmol), yielding the product as a reddish-brown oil. The formation of "Y a" was confirmed by (+ve) APCI-MS m/z = 316.18. Without further purification, compound "Y a" was used in the next reaction.
3: (Y b) (6-(3,3-Diethoxypropynyl)pyridin-3-yl)(morpholino)methanone
Spectral data of the compound are in agreement with previous reports [48]. The general procedure for the Sonogashira reaction was applied using "X b" (1.00 g, 3.68 mmol), yielding the product as a dark red liquid in 73% (0.85 g, 2.68 mmol) yield.
The spectral data of compound "Y c" are in agreement with the literature [48]. The general procedure for the Sonogashira reaction was applied using "X c" (1.00 g, 3.89 mmol), yielding the product as a reddish oil in 78% (0.923 g, 3.03 mmol) yield.
5: (Y d) (4-(6-(3,3-Diethoxypropynyl)nicotinoyl)piperazinyl)(4-fluorophenyl)methanone
The general procedure for the Sonogashira coupling was applied using "X d" (0.39 g, 1.00 mmol), yielding the product as a red oil. The formation of "Y d" was confirmed by
The general procedure for the Sonogashira reaction was applied using "X e" (0.425 g, 1.00 mmol), yielding the product as a red liquid. The formation of "Y e" was confirmed by The product was used further without additional purification.
Cell Lines and Culturing
The cell lines used for the investigations in the present study originate from lung carcinoma (A-427, LCLC-103H), pancreas carcinoma (DanG, PATU-8902, YAPC), esophageal carcinoma (Kyse-70), breast carcinoma (MCF-7), cervix adenocarcinoma (Siso), or urinary bladder carcinoma (RT-4, RT-112) and were obtained from Deutsche Sammlung von Mikroorganismen und Zellkultur (DSMZ; Braunschweig, Germany). The ovarian adenocarcinoma (A2780, A2780 cisplatin-resistant) cell lines were a gift of Dr. Julie A. Woods (Ninewells Hospital, University of Aberdeen, Aberdeen, UK). Additionally, two cell lines developed from the chronic myelogenous leukemia cell line KBM-7, namely, HAP-1 and the corresponding GPx1-knockout cell line HAP-1.KO.GPx1, were purchased from Horizon Discovery (Cambridge, UK). Unless otherwise stated, all cells were cultured under standard conditions at 37 °C and 5% CO 2 in a humidified atmosphere and grown in RPMI 1640 medium supplemented with 10% FCS and 1% penicillin/streptomycin, except for HAP-1 cells, both native and knockout, that were cultivated with IMDM medium containing 10% FCS, 1% penicillin/streptomycin, and 1% stable glutamine. As all cell lines were adherent, a trypsin/EDTA solution was used for detachment and harvesting. Cell lines were checked every 6 months for mycoplasma by using the Mycoplasma Detection Kit from Lonza (Basel, Switzerland). Under standard culturing conditions, levels of atmospheric oxygen were about 19%, while for hypoxia experiments oxygen was reduced to 1% by gassing with nitrogen in a HeraCell 150i incubator (ThermoFisher Scientific, Waltham, MA, USA). The levels of dissolved oxygen in cell culture media were measured with Seven2Go TM pro from Mettler-Toledo (Greifensee, Switzerland).
GPx1 Enzyme Activity Assay
The potential of the pentathiepins to inhibit the GPx1 was assessed with an enzymatic assay as described previously [49]. It is based on the catalytic cycle of the enzyme coupled to an NADPH detection reaction. Here, a bovine erythrocyte GPx1 with a sequence similarity of 87% to the human homolog [50] was used for reasons of affordability. The GPx1 was treated with either vehicle (solvent DMF) or the respective pentathiepin, and the reaction started by addition of tert-butylhydroperoxide (t-BHP). For the assay, all solutions were prepared with potassium phosphate buffer (50 mM, pH 7.4, EDTA 1.1 mM, Triton-X 0.01%), except for the inhibitors and t-BHP, which were diluted in DMF or water, respectively. First, 180 µL of a 0.125 U/mL GPx1 solution and 30 µL of a serial dilution of the putative inhibitor dissolved in DMF were added in triplicate to the wells of a UV-transparent 96-well plate. As a negative control, only DMF was applied at the same dilution as the inhibitor solution. All reagents, including the GPx1 solution, were added by pipetting. Then, 30 µL of a GSH solution (2.5 mM) and 30 µL of a GR/NADPH solution (2 U/mL; 2.0 mM) were added per well. The reaction was started by adding 30 µL of a t-BHP solution (5.0 mM). Final concentrations were 0.075 U/mL bovine erythrocyte GPx1, 0.2 U/mL of GR, 0.25 mM of GSH, 0.2 mM of NADPH, 0.5 mM of t-BHP, and inhibitor concentrations between 0.05 and 12.5 µM. GPx1 activity was indirectly measured by monitoring the decrease of NADPH at λ = 340 nm for 30 min every 15 s at room temperature. The GPx1 activity of the treated samples was related to the solvent-treated control and the relative activities analyzed in Prism 7 (GraphPad) via dose-response graphs and the calculation of the inflection point corresponding to the relative IC 50 (half-maximal inhibitory concentration). The relative IC 50 corresponds to the concentration at which half of the effect of the test compound is achieved. An absolute IC 50 , where 50% of the enzyme is inhibited, was calculated via interpolation of the sigmoidal graph.
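The dose-response evaluation was performed in Prism 7; as a rough illustration of the same calculation outside Prism, the following sketch fits a four-parameter logistic model to hypothetical relative-activity data and derives both a relative and an absolute IC50. It is not the analysis used for the published values.

# Minimal sketch: four-parameter logistic fit of relative GPx1 activity vs. inhibitor
# concentration. Concentrations and activities below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.05, 0.1, 0.39, 0.78, 1.56, 3.12, 6.25, 12.5])        # µM, hypothetical
activity = np.array([0.98, 0.95, 0.86, 0.71, 0.52, 0.31, 0.18, 0.12])   # relative to DMF control

def four_pl(x, bottom, top, ic50, hill):
    # Four-parameter logistic (sigmoidal dose-response) model.
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, conc, activity, p0=[0.1, 1.0, 1.0, 1.0], maxfev=10000)
bottom, top, ic50_rel, hill = params
print(f"relative IC50 ≈ {ic50_rel:.2f} µM, Hill slope ≈ {hill:.2f}")

# Absolute IC50: concentration at which relative activity equals 0.5
# (50% of the enzyme inhibited), obtained by inverting the fitted curve.
ic50_abs = ic50_rel * ((top - bottom) / (0.5 - bottom) - 1.0) ** (1.0 / hill)
print(f"absolute IC50 ≈ {ic50_abs:.2f} µM")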
Enzymatic Assays for the Activity of Glutathione Reductase and Catalase
Off-target assays were conducted to assess the specificity of the pentathiepins for inhibiting the GPx1; these were the activities of GR and CAT (from bovine liver) that have previously been described [49,51]. As the baker's yeast GR is an important component in the GPx assay, its inhibition by pentathiepins had to be excluded. A fixed pentathiepin concentration of 25 µM was tested in triplicate, following the same principle as mentioned before, by monitoring the consumption of NADPH by the GR at λ = 340 nm while GSSG is reduced back to GSH. All assay components were prepared in potassium phosphate buffer (50 mM, pH 7.4, 1.1 mM EDTA, 0.01% Triton-X solution), except for the inhibitor that was diluted in DMF. Subsequently, 180 µL of buffer was added per well followed by 30 µL of a 2.0 U/mL GR solution, 30 µL of a 2.0 mM NADPH solution, and 30 µL of a 250 µM pentathiepin solution or 30 µL of DMF as negative control. Finally, to start the reaction, 30 µL of a 2.5 mM GSSG solution was added, resulting in final concentrations of 0.2 U/mL of GR, 0.2 mM of NADPH, 25 µM of pentathiepin, and 0.25 mM of GSSG. After measuring the decrease of NADPH every 15 s for 30 min at room temperature, the relative inhibition of the GR was calculated by relating the activity of the pentathiepin-treated samples to the negative control.
The CAT activity assay is based on the principle that, in the presence of H 2 O 2 and heat, dichromate in acetic acid is reduced to chromic acetate via perchromic acid as an unstable intermediate. Here, the amount of chromic acetate is measured twice, once directly after adding H 2 O 2 (t 0min ) and once again after the CAT (bovine origin) has split the H 2 O 2 for a period of 10 min (t 10min ) in the presence or absence of a putative inhibitor. Samples in this assay included a blank (+CAT/+DMF/−H 2 O 2 ), a H 2 O 2 standard (−CAT/+DMF/+H 2 O 2 ), a negative control (+CAT/+DMF/+H 2 O 2 ), and the test conditions (+CAT/+25 µM pentathiepin/+H 2 O 2 ). For each condition, 3.0 mL of 0.002 mg/mL CAT in potassium phosphate buffer (50 mM, pH 7.4, EDTA 1.1 mM, Triton-X 0.01%) were prepared in a glass tube. Additionally, two tubes were prepared per condition, each containing 2 mL of a reagent consisting of 1 part of an aqueous solution of K 2 Cr 2 O 7 (5%) and 2 parts glacial acetic acid. Subsequently, the solvent or inhibitors were added, and the tubes placed on a shaker for 5 min. To start the reaction, 61.8 µL of a 30% H 2 O 2 solution was added and then mixed, and immediately 1 mL was transferred to one tube containing the K 2 Cr 2 O 7 /acetic acid solution (t 0min ). The remaining 2 mL was incubated on a shaker for 10 min, and then the previous step was repeated (t 10min ). Finally, all tubes were boiled in a water bath and cooled down to room temperature; the instrument was blanked with the blank solution, and absorbance was measured at λ = 570 nm. The difference in absorbance between t 0min and t 10min was calculated, and all samples were related to the negative control.
MTT Assay
The effect of the pentathiepins on the viability of the cell lines was assessed via an MTT assay, performed as previously described [37]. Healthy and viable cells convert the yellow soluble 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide to the insoluble purple corresponding formazan, which can be spectrophotometrically measured [32]. For this assay, 5000 cells (1250 for LCLC-103H) in 0.1 mL of medium were seeded per well of a 96-well plate and incubated at standard conditions for 24 h prior to treatment. A serial dilution of the test compounds was prepared in medium, covering final concentrations from 10.0 to 0.04 µM, and 0.1 mL of the working solution was added to each well in triplicate. After 48 h, 20 µL of a 2.5 mg/mL MTT in PBS solution was pipetted into each well and further incubated for 4 h at standard conditions. Thereafter, medium was replaced with 50 µL DMSO, and the plates were put on a shaker for 5 min and subsequently read at λ = 570 nm, with a SpectraMax 384plus (Molecular Devices, Sunnyvale, CA, USA). The cell viabilities of the treatment conditions were calculated in relation to a solvent control sample. The absolute IC 50 was calculated via nonlinear regression followed by interpolation of the dose-response graphs in Prism 7 (GraphPad, San Diego, CA, USA). The IC 90 was derived from the IC 50 by using the slope factor of the dose-response graph with the tool QuickCalcs from GraphPad (GraphPad Software, San Diego, CA, USA). All determined IC 50 and all applied IC 90 values are listed in Tables S3 and S4.
Adapted MTT Assay to Assess the Influence of Ferroptosis
This experiment was performed similarly to that described in the previous section. To assess the importance of ferroptosis for cell death, a co-incubation of the cells with a serial dilution of the pentathiepins and ferrostatin-1 (Fer-1) as a known inhibitor of ferroptosis was performed [15,44,45]. As negative control, the cells were incubated with the serial dilution of the test compounds and an amount of DMSO corresponding to the Fer-1 treatment solution. If ferroptosis were to be involved in the cytotoxicity of the compounds, the addition of Fer-1 should protect cells from the treatment with pentathiepin. The cells were seeded in 100 µL medium per well but then treated with 50 µL of pentathiepin dilution and 50 µL of a Fer-1 or corresponding DMF working solution, which resulted in final concentrations of pentathiepins ranging from 10 to 0.04 µM and Fer-1 of either 1.5 or 6.0 µM or DMF only, respectively. The IC 50 values were calculated as described above, and Fer-1-treated cells were set in relation to those without the additive.
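The derivation of the IC90 from the IC50 and the slope factor, as done here with GraphPad's QuickCalcs, can be illustrated by the following sketch; it assumes the standard sigmoidal relationship ICF = IC50 * (F/(100 - F))^(1/HillSlope), and the input values are hypothetical.

# Minimal sketch: deriving an ICF (e.g., IC90) from the IC50 and the Hill slope.
def ic_f(ic50: float, hill_slope: float, f: float = 90.0) -> float:
    # Concentration giving F% of the maximal effect for a sigmoidal dose-response curve.
    return ic50 * (f / (100.0 - f)) ** (1.0 / hill_slope)

ic50 = 0.8        # µM, hypothetical
hill_slope = 1.5  # hypothetical slope factor from the nonlinear regression
print(f"IC90 ≈ {ic_f(ic50, hill_slope):.2f} µM")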
Crystal Violet Proliferation Assay
The antiproliferative effect of the pentathiepins on human cancer cell lines was analyzed by crystal violet assay based on the protocol of a previous publication [37]. The dye binds unspecifically to DNA and proteins, hence staining cells that remain after treatment. After the dye is dissolved in ethanol, the optical density is measured. Here, 1000 cells (250 for LCLC-103H) in 0.1 mL of medium were seeded per well of a 96-well plate and incubated at standard conditions for 24 h prior to treatment to allow for adhesion of the cells. A serial dilution of the test compounds was prepared in medium, covering final concentrations from 10.0 to 0.04 µM, and 0.1 mL of the working solution was added to each well in triplicate. Additionally, samples for each cell line were seeded and directly fixed after 24 h to measure the cell density at the beginning of treatment (t 0 ). Fixation was performed by replacing medium with 0.1 mL of a solution containing 1% glutaraldehyde in Dulbecco's buffer (0.2 g/L KCl, 0.1 g/L MgSO 4 ·7 H 2 O, 1.55 g/L Na 2 HPO 4 ·7 H 2 O, 0.2 g/L KH 2 PO 4 , and 8 g/L NaCl in water) for 20 min. Plates were stored in 0.1 mL of Dulbecco's buffer until further processing. After 96 h of treatment, the same fixation process was applied to the plates containing cells and test compounds. To stain the cells, 0.1 mL of an aqueous solution of crystal violet (0.02%) was added per well for 20 min, then discarded, and the plates were afterwards rinsed in clear water for 30 min. The dye was extracted in 50 µL of ethanol per well during 2 h of shaking. The plates were read at λ = 570 nm using a SpectraMax 384plus (Molecular Devices, Sunnyvale, CA, USA), and the proliferation of the treatment conditions was calculated in relation to a solvent control sample after subtracting the background value of the t 0 -plates. The GI 50 (half-maximal growth inhibitory concentration) was calculated via nonlinear regression followed by interpolation of the dose-response graphs in Prism 7 (GraphPad, San Diego, CA, USA).
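The normalization of the crystal violet data can be sketched as follows: the t 0 background is subtracted from both treated and control wells before the ratio is formed. The OD values below are hypothetical.

# Minimal sketch of t0-corrected relative proliferation; hypothetical OD values.
od_t0 = 0.12        # mean OD of the t0 plate (fixed at treatment start)
od_control = 1.05   # mean OD of solvent-treated wells after 96 h
od_treated = 0.40   # mean OD of compound-treated wells after 96 h

relative_growth = (od_treated - od_t0) / (od_control - od_t0)
print(f"proliferation relative to control: {100 * relative_growth:.1f}%")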
Determination of Growth Rates and Doubling Times
The crystal violet assay was also used to determine the doubling times of the cell lines. For this purpose, 1000 cells were seeded in 0.1 mL per well of a 96-well plate and incubated for 24, 48, 72, and 96 h (within the log-linear growth phase) under normoxic (19% atmospheric oxygen) and hypoxic (1% atmospheric oxygen) conditions. Fixation, staining, and measurement were conducted as described in the previous paragraph. The growth rates and doubling times were calculated via the following equations:

growth rate (gr) = ln(N(t)/N(0))/t (1)

doubling time = ln(2)/gr (2)

where N(t) is the OD at time t, N(0) is the OD at time 0, and t is the time in h.
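Equations (1) and (2) can be applied directly to the crystal violet readings; the sketch below uses hypothetical OD values and, in addition, a linear fit of ln(OD) against time as a pooled estimate over the log-linear phase (an assumption, not part of the original protocol).

# Sketch: growth rate and doubling time from crystal violet ODs (equations 1 and 2).
# OD values are hypothetical.
import numpy as np

t = np.array([24.0, 48.0, 72.0, 96.0])      # h after seeding
od = np.array([0.20, 0.38, 0.75, 1.45])     # OD at 570 nm
od0 = 0.10                                  # OD at time 0

growth_rates = np.log(od / od0) / t         # gr = ln(N(t)/N(0)) / t
doubling_times = np.log(2) / growth_rates   # h
print(np.round(doubling_times, 1))

# Pooled estimate from a linear fit of ln(OD) versus time:
slope, _ = np.polyfit(t, np.log(od), 1)
print(round(np.log(2) / slope, 1))          # doubling time in h from the fitted rate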
DCFDA-Based Flow Cytometric Assay for Detection of Reactive Oxygen Species
In order to quantify intracellular ROS, we performed a flow cytometric assay based on the ROS sensor 2′,7′-dichlorofluorescein diacetate (DCFDA) [52,53]. For acute effects of pentathiepins on intracellular ROS levels, the cells were stained with DCFDA first and subsequently incubated with 25 µM of compound for 10 min. For monitoring a treatment of 24 or 48 h, the cells were incubated with the respective pentathiepin (IC 90 , see Table S4) and loaded with the DCFDA dye prior to analysis. Per condition, 250,000 cells were seeded in 2 mL of medium per well of a 6-well plate and allowed to adhere overnight. For measurement of acute ROS, the cells were stained with 1 mL of a 20 µM DCFDA-PBS solution for 30 min under standard incubation conditions. Afterwards, the solution was discarded, and the monolayer was washed twice with 1 mL of PBS before 3 mL of medium containing 25 µM of pentathiepin was added for 10 min at 37 °C. After removing the solution and washing again with PBS, the cells were detached by using 0.5 mL of a diluted trypsin/EDTA-PBS solution (25%) and harvested with 1 mL of culture medium by carefully rinsing the wells. The cell suspension was transferred to 1.5 mL tubes, centrifuged for 5 min at 500× g, and washed with 1 mL of PBS. After removing the supernatant, each pellet was resuspended in 0.5 mL of PBS for measurement via the MACSquant Analyzer 10 at λ Ex/Em = 488/530 nm. Relative ROS levels resulting from treatment with pentathiepin were calculated by relating the fluorescence intensity of treated samples to that of the negative control. For the experiments covering periods of 24 and 48 h, the cells were incubated with the compounds first and stained with DCFDA afterwards. To assess the influence of additional GSH, we added 3 or 30 µM GSH to the medium containing 25 µM of pentathiepin, and the protocol was followed as described.
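The relative ROS level is the ratio of treated to control DCF fluorescence; the minimal sketch below assumes hypothetical per-cell intensities exported from the flow cytometer and uses the median as summary statistic, which is an assumption rather than the authors' stated choice.

# Sketch: relative ROS level as fold change of DCF fluorescence over the negative control.
# Per-cell intensities are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
control_fi = rng.lognormal(mean=6.0, sigma=0.5, size=10000)   # untreated cells
treated_fi = rng.lognormal(mean=6.7, sigma=0.5, size=10000)   # pentathiepin-treated cells

fold_change = np.median(treated_fi) / np.median(control_fi)
print(f"relative ROS level: {fold_change:.2f}-fold of control")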
Apoptosis Assay Based on Annexin V-FITC and PI
The annexin V-FITC/PI assay, which discriminates between viable, early, and late apoptotic cells within a sample population, was performed according to the manufacturer's instructions (Miltenyi Biotec, Teterow, Germany). Briefly, 125,000 cells were seeded in 2 mL per well of a 6-well plate, allowed to adhere overnight, and subsequently treated with the respective pentathiepin (IC 90 , see Table S4) diluted in 3 mL of medium. After incubation periods of 6, 24, or 48 h, the cells were harvested, washed with the provided binding buffer, and stained with the annexin V-FITC reagent for 20 min at room temperature (RT) in the dark. After another washing step, the pellet was resuspended, and PI was added immediately before flow cytometric measurement at λ Ex / Em = 488/525 ± 50 nm for FITC and λ Ex / Em = 488/655-730 nm for PI with the MACSquant Analyzer 10 (Miltenyi Biotec, Teterow, Germany).
Cell Cycle Analysis Based on PI Staining
For this assay, cells were seeded and treated as for the apoptosis assay, i.e., 125,000 cells per well were incubated with the pentathiepins (IC 90 , see Table S4) for 24 or 48 h. Cells were harvested by trypsinization, collected in 1.5 mL tubes, and washed twice with PBS by centrifuging for 5 min at 500× g and discarding the supernatant after every step. For fixation, 500 µL of ice-cold ethanol (70%) was added dropwise under vortexing with a subsequent incubation on ice for 30 min. Then, the cells were centrifuged for 10 min at 4000 rpm and 4 °C, the supernatant was removed, and the cells were resuspended in 500 µL of PBS containing 25 µg/mL of PI and 100 µg/mL of RNase. The staining solution was incubated for 30 min, and the cell suspension was measured at λ Ex/Em = 488/655-730 nm using a MACSquant Analyzer 10 (Miltenyi Biotec, Teterow, Germany).
Western Blot Analysis
All Western blot analyses were performed with reagents, material, and devices purchased from Bio-Rad (Feldkirchen, Germany) if not stated otherwise. To obtain protein lysates, we seeded cells and treated them as described for the other methods. Briefly, 125,000 cells were seeded per well of a 6-well plate, incubated to adhere overnight, and then treated with pentathiepins (IC 90 , see Table S4) for 24 h. Cells were harvested by trypsinization and centrifugation for 5 min at 500× g. After a washing step with PBS, the cell pellet was resuspended in lysis buffer (50 mM Tris (pH 7.4), 100 mM NaCl, 100 mM NaF, 5 mM EDTA, 0.2 mM Na 3 VO 4 , 0.1% Triton-X, and 1% protease inhibitor cocktail added immediately before use). Lysis was supported by incubating the samples for 10 min in an ultrasonic bath before centrifuging for 10 min at 18,000× g and 4 °C. Protein quantification was performed via the Bradford method using ROTI® Nanoquant reagent (Carl Roth GmbH, Karlsruhe, Germany) and bovine serum albumin as calibration standard. For electrophoretic separation, 15-40 µg of protein was loaded per well of a polyacrylamide gel (Mini- or Midi-Protean® TGX stain-free gel, Bio-Rad, Feldkirchen, Germany), and subsequent blotting was performed with Trans-Blot® Turbo™ Transfer Pack Mini or Midi PVDF membranes from Bio-Rad (Feldkirchen, Germany). The membranes were blocked with a 10% non-fat milk powder solution in Tris-buffered saline (TBS; 2.42 g/L Tris, 8.48 g/L NaCl in water) plus 0.5% Tween-20 (TBST) for 1 h at RT. Incubation with primary antibodies took place overnight at 4 °C in a 1:1000 dilution in TBST containing BSA (1%). The secondary antibodies, conjugated to horseradish peroxidase, were diluted 1:10,000 in TBST with 1% BSA and incubated for 4 h at RT. A detection step followed, applying Clarity™ Western ECL substrate with subsequent imaging via the INTAS Advanced Fluorescence Imager. Between all incubation steps, the blots were washed three times for 10 min with TBST. The bands were quantified with the Image Lab software and normalized using the total protein quantification of the TGX stain-free gel system from Bio-Rad [38-40].
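Normalization of band intensities to the total-protein signal amounts to a simple ratio of ratios; the sketch below uses hypothetical densitometry values and is not tied to the Image Lab output format.

# Sketch: normalizing a target band to total protein and expressing it relative to the control lane.
# Densitometry values are hypothetical arbitrary units.
import numpy as np

target = np.array([1200.0, 2100.0, 800.0])              # band volumes: control, treatment A, treatment B
total_protein = np.array([50000.0, 52000.0, 48000.0])   # stain-free total protein per lane

normalized = target / total_protein                     # corrects for unequal loading
relative = normalized / normalized[0]                   # relative to the untreated control lane
print(np.round(relative, 2))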
Plasmid Cleavage Assay
This assay was used to assess the cleaving ability of the pentathiepins in a cell-free setting and was based on procedures described in [54]. Plasmids occur in three different conformations with distinct electrophoretic mobilities that can be separated and visualized via agarose gel electrophoresis. For each condition, 0.3 µg of pBR322 plasmid DNA was incubated with either the pentathiepin or a corresponding amount of the solvent acetonitrile. Incubations were performed in sodium phosphate buffer (50 mM) with or without GSH for 20 h at 37 °C in a water bath. Afterwards, the samples were separated by using an agarose gel (1%) run at 80 V for 2 h. Resulting bands were stained with GelRed® (Sigma-Aldrich, Taufkirchen, Germany) and imaged and quantified via Gel-Doc EZ Imager and Image Lab (Bio-Rad, Feldkirchen, Germany), respectively. To assess the cleavage ability of the pentathiepins, we incubated the plasmid with either 5 or 25 µM of compound at a fixed concentration of GSH (2 mM). In another experiment, the amount of GSH varied between 1 µM and 10 mM, while the pentathiepin concentration was 5 µM. In contrast to the aforementioned settings, where a buffer pH of 7.1 was applied, in a third attempt the influence of different pH on the cleavage outcome was analyzed by adjusting the buffer pH to 5.1, 6.1, or 8.1 while using fixed concentrations of both test compound and GSH. Another attempt to gain insights into the cleaving mechanism was made by adding either catalase, superoxide dismutase (100 µg/mL each), or both to 5 µM pentathiepin with 2 mM GSH.
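Cleavage can be summarized as the distribution of plasmid forms per lane. The sketch below uses hypothetical band intensities and deliberately applies no correction factor for differential dye binding of the supercoiled form; whether such a correction is appropriate for GelRed staining is left open.

# Sketch: fractions of supercoiled, open-circular (nicked), and linear plasmid per lane.
# Band intensities are hypothetical values from gel quantification software.
import numpy as np

# rows: lanes (solvent control, 5 µM, 25 µM pentathiepin); columns: [supercoiled, open-circular, linear]
bands = np.array([
    [9000.0,  900.0,  100.0],
    [5500.0, 3800.0,  700.0],
    [1200.0, 7100.0, 1700.0],
])
fractions = bands / bands.sum(axis=1, keepdims=True) * 100
print(np.round(fractions, 1))   # a shift away from the supercoiled form indicates cleavage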
Alkaline Comet Assay
With this assay, the intensity of DNA damage can be assessed at the level of a single cell, with the alkaline version detecting both single- and double-strand breaks as well as alkali-labile sites. The method presented here is an adaptation of the protocol published by Olive and Banath [55] and is based on the distinct electrophoretic mobility of damaged and intact genomic DNA. The latter has low mobility, thus does not migrate in the electric field, and remains in a nucleoid shape after lysis, referred to as the comet head. In contrast, when strand breaks are created, free charged ends emerge and increase the electrophoretic mobility, prompting DNA segments to drift away from the comet head, thereby forming the so-called comet tail. As descriptor of still intact genomic DNA, the percentage of DNA in the comet head was selected. Per condition, 50,000 cells of either Siso, HAP-1, or HAP-1.KO.GPx1 were diluted in 1 mL of PBS and subsequently treated with either a 1% DMF aqueous solution as negative control, 20 µM of H 2 O 2 as positive control, or 25 µM of pentathiepin for 15 min on ice to impair intracellular DNA repair mechanisms. Afterwards, 400 µL of the cell suspension was blended with 1.2 mL of a 1% low-melting-point agarose solution (40 °C) and smoothly distributed on agarose-precoated glass slides. Polymerization was finished after 3 min at RT, and the slides were horizontally submerged in alkaline lysis buffer (1.2 M NaCl, 0.1% N-lauryl-sarcosinate, 0.1 M Na-EDTA, 0.26 M NaOH) before storing them at 4 °C overnight. The slides were then rinsed three times in rinse/electrophoresis buffer (0.002 M Na-EDTA, 0.03 M NaOH) before electrophoretic separation was performed for 25 min at 0.6 V/cm. A neutralization step with distilled water was followed by staining the slides with 250 µL of an aqueous 10 µg/mL PI solution for 20 min. For visualization and image capturing, a DMi8 fluorescent microscope and LASX software (Leica Microsystems, Wetzlar, Germany) were used, and the comets were scored with the help of CometScore 2.0 (rexhoover.com). Per condition and replicate, at least 100 comets were analyzed to calculate a mean level of damage.
Fluorescence Microscopy
For visualization of the putative trackable pentathiepin 1, a DMi8 (Leica Microsystems, Wetzlar, Germany) and the corresponding LASX software were used. Siso cells were seeded on cover slips at a density of 125,000 cells in 2 mL per well of a 6-well plate, incubated overnight for adherence, and then treated with compound 1 (IC 90 , see Table S4) or solvent. After 24 h, the medium was removed, and the cells were washed once with PBS and placed on microscopic glass slides with mounting medium. The cover slips were fixed with sealant, immersion oil was applied, and the cells were subsequently visualized. The pentathiepin distribution was imaged with the 63× magnification objective in the DAPI channel (λ ex = 325-407 nm, λ em = 461 nm) and the autofluorescence of the cells in the FITC channel (λ ex = 488 nm, λ em = 525 nm).
Luminescent Caspase Activity Assay
To study the activation of the effector caspases-3 and -7, we performed a luminescence-based assay according to the manufacturer's instructions (Caspase-Glo, Promega, Walldorf, Germany). Briefly, 5000 cells were seeded in 0.1 mL per well of a 96-well plate and allowed to adhere overnight. The cells were treated with either 0.2% DMF, 0.5 µM of doxorubicin, or the respective pentathiepin (IC 90 , see Table S4) and incubated for 24 h at standard conditions. Afterwards, the standard protocol was followed, including a 30 min incubation time before measurement of luminescent signals by using the Spectramax i3x (Molecular Devices, Sunnyvale, CA, USA). After subtraction of the background luminescence, the positive control and treatment conditions were related to the negative control.
Statistical Evaluation and Correlation Analysis
Data and diagrams were prepared with Prism 7 (Version 7.0) and 9 (Version 9.1) from GraphPad Software Inc. (San Diego, CA, USA) and presented as means with standard deviations (SD) of at least three independent experiments if not stated otherwise. For the display of dose-response graphs in the GPx1 assay, the 95% confidence interval was included instead of the SD. Assessment of statistical significance was performed via ANOVA (analysis of variance) coupled with Dunnett's or Tukey's multiple comparisons test, and significance levels were expressed as * p < 0.05, ** p < 0.01, *** p < 0.001, or **** p < 0.0001.
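The ANOVA-plus-Dunnett procedure has open-source equivalents; the sketch below compares hypothetical replicate values of two treatments against a shared control and assumes SciPy ≥ 1.11, which provides scipy.stats.dunnett.

# Sketch: one-way ANOVA followed by Dunnett's test against a control group.
# Replicate values are hypothetical; requires SciPy >= 1.11.
import numpy as np
from scipy import stats

control = np.array([100.0, 98.0, 103.0, 101.0])
pent_a = np.array([85.0, 80.0, 88.0, 83.0])
pent_b = np.array([55.0, 60.0, 52.0, 58.0])

f_stat, p_anova = stats.f_oneway(control, pent_a, pent_b)
dunnett_res = stats.dunnett(pent_a, pent_b, control=control)
print(f"ANOVA p = {p_anova:.4f}; Dunnett p-values = {np.round(dunnett_res.pvalue, 4)}")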
To analyze a putative correlation between potency and cell doubling times, we used the Pearson correlation calculation of Prism 7 or 9 (GraphPad Software, San Diego, CA, USA). It was assessed to what extent two variables vary together; in this case, X was either the IC 50 from the MTT assay or the GI 50 from the crystal violet assay, and Y was the doubling time of the respective cell line. Results are output as the correlation coefficient r with the corresponding p-value as level of significance. For interpretation of the r-values, we applied a common scale: 0.0-<0.1 negligible, 0.1-<0.4 weak, 0.4-<0.7 moderate, 0.7-<0.9 strong, ≥0.9 very strong [36].
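The potency-doubling-time correlation can likewise be checked with SciPy. The sketch below pairs hypothetical IC 50 values with hypothetical doubling times and applies the descriptive scale quoted above to |r|.

# Sketch: Pearson correlation between potency (IC50) and cell doubling time.
# Paired values are hypothetical, one pair per cell line.
from scipy.stats import pearsonr

ic50 = [0.8, 1.2, 0.5, 2.4, 1.9, 0.9, 3.1, 1.4]     # µM (MTT assay)
doubling_time = [22, 30, 18, 45, 38, 25, 52, 33]    # h

r, p_value = pearsonr(ic50, doubling_time)

def interpret(r_abs):
    # Descriptive scale for |r|: negligible / weak / moderate / strong / very strong
    if r_abs < 0.1:
        return "negligible"
    if r_abs < 0.4:
        return "weak"
    if r_abs < 0.7:
        return "moderate"
    if r_abs < 0.9:
        return "strong"
    return "very strong"

print(f"r = {r:.2f} ({interpret(abs(r))}), p = {p_value:.3f}")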
Conclusions
Pentathiepins represent a class of sulfur-containing small molecules that induce a broad range of biological effects. Recently, such compounds were found to potently and specifically inhibit glutathione peroxidase 1 and to induce reactive oxygen species, apoptosis, and depolarization of the mitochondrial membrane. Such biological features render pentathiepins a promising class of candidate anticancer agents. However, a comprehensive study with a range of biological assays in different human cancer cell lines has been lacking. Hence, the aim of the present work was to provide a comprehensive insight into the in vitro effects of pentathiepins in various cancer cell lines, as well as to identify possible structure-activity relationships.
Six new pentathiepins were synthesized that possess strong cytotoxicity in a panel of 14 human cancer cell lines with IC 50 values in the low micromolar range. Hypoxia decreased these effects, except for pentathiepin 1, indicating that the availability of oxygen is necessary for pentathiepins 2-6 to unfold their activity.
The cytotoxicity of the compounds is believed to result chiefly from ROS-mediated cleavage of DNA in the presence of GSH. Here, a connection between the induction of ROS and the ability to cleave DNA was found. Pentathiepins 2, 3, 4, and 5 caused intracellular oxidative stress and at the same time the highest rates of DNA damage. Cell cycle arrest in the G 2 /M phase for these four compounds mirrors what is known for other DNA damaging anticancer agents, such as bleomycin [46]. On the other hand, for pentathiepin 1, we detected a lower potential to cleave DNA but also observed by fluorescence microscopy that this compound enters cells but not their nuclei.
With regards to the mode of cell death, strong evidence for the induction of apoptosis was found, but not for all endpoints in all cell lines. The externalization of phosphatidylserine was accompanied by the detection of cleaved PARP1 and the activation of caspase-3 and -7 in the two HAP-1 leukemia cell lines, but these attributes were less apparent in the A2780 and Siso solid tumor cell lines.
With regards to the previously discovered ability of pentathiepins to inhibit the GPx1, we confirmed this for the isolated enzyme from bovine erythrocytes with the new compounds 1-5. However, evidence to date indicates that GPx1 inhibition is not required for the cytotoxic effects of pentathiepins. We base this conclusion on the observation that compound 6 is cytotoxic without being a potent GPx1 inhibitor. Moreover, no significant differences were detected between the GPx1 knockout variant and the parental cell line HAP-1 throughout the majority of biological assays performed herein.
While it is too early to state that pentathiepins have therapeutic potential as anticancer drugs, progress has been made to chemically optimize their structures towards biological activity. Finally, the finding that the six pentathiepins show structure-dependent biological activity offers the possibility to modulate biological effects by fusion of specific scaffolds to the pentathiepin ring.
"year": 2021,
"sha1": "cce835c6846e142383694ac9309e407495e51c30",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/14/7631/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d15be07938859e9aa663f56ae70fc04b743cbc9",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A key to genera of Dikraneurini from China, with description of a new species of Cornicola Ohara & Hayashi (Hemiptera, Cicadellidae, Typhlocybinae)
Abstract
The leafhopper genus Cornicola Ohara & Hayashi, previously recorded from Japan, is recorded from China for the first time and a new species, C. maculatus Xu, Dietrich & Qin, sp. nov., is described and illustrated, including its color polymorphism. This genus has male genitalia and hind wing venation similar to those found in Empoascini but it is more appropriately placed in Dikraneurini. A key to species of Cornicola is given together with a key to the genera of Dikraneurini from China.
Introduction
The tribe Dikraneurini is a diverse group and differs from other Typhlocybinae leafhoppers in lacking an appendix in the forewing and in usually having the hind wing submarginal vein complete and extended past vein RA or RP basad along the costal margin (Dietrich 2005). However, some genera included in this tribe either lack the hind wing submarginal vein (Typhlocybella Baker) or have this vein reduced or obsolete at the apex of the costal margin and thus resemble species of Empoascini (Viraktamath and Dietrich 2011; Dietrich 2013; Ohara and Hayashi 2022). One such genus in the latter category is Cornicola Ohara & Hayashi, 2022, with C. mizuki Ohara & Hayashi, from Japan, as its type species. In this paper, a second species of Cornicola is described as new from southwest China, together with a key to Chinese Dikraneurini genera. To date, Dikraneurini contain 74 genera and 497 valid species distributed throughout the world (Dmitriev et al. 2022), of which 25 genera and more than 60 species occur in China and have been studied by Matsumura (1931), Anufriev and Emeljanov (1988), Dworakowska (1972, 1979, 1993a), Chou and Ma (1981), Zhang and Chou (1988), Zhang (1990), Zhang and Kang (2007), Zhang (2012, 2013), Yang et al. (2012), Yang (2015, 2020), Huang et al. (2018), Kang et al. (2018), and Qin et al. (2020).
Materials and methods
The specimens examined in this study were preserved in 95% ethanol and stored for three years, resulting in loss of the original color; they are now deposited in the insect collection of the Illinois Natural History Survey, Champaign, Illinois (INHS). Morphological terminology used in this work follows Xu et al. (2021).
Diagnosis. Cornicola is easily distinguishable from all other known Typhlocybinae in having the following combination of characters: (1) crown of head much narrower than pronotum and strongly elevated above anterior margin of pronotum (Figs 3, 6);
Notes. Ohara and Hayashi (2022) recognized that Cornicola is related to Igutettix Matsumura, 1932 and therefore placed the genus in Dikraneurini; and also compared the genus to Vilbasteana Anufriev, 1970, Koreoneura Hossain & Kwon, 2021, and Sweta Viraktamath & Dietrich, 2011. However, the hind wing venation of Cornicola differs from the above-mentioned genera and instead resembles that of the Southeast Asian dikraneurine genera Rakta Dietrich, 2013 and Albodikra Dietrich, 2013 in having the submarginal vein obsolete or reduced apically along the costal margin of the hind wing (Fig. 10; fig. 2b, d in Dietrich 2013) and thus resembling that of Empoascini. Cornicola differs from these two genera in having an anteclypeus only slightly convex in both sexes (Figs 5, 8) (strongly swollen and broad in males of Rakta and Albodikra). Despite a strong resemblance of the hind wing venation of the new genus to the common pattern in Empoascini and some additional similarities in the male genitalia (e.g., elongate style), Cornicola is clearly more closely related to Dikraneurini and may represent a transitional form between Dikraneurini and Empoascini.
Key to species of Cornicola
Basal sternal abdominal apodemes parallel sided, reaching end of segment IV (Fig. 15). Male pygofer almost triangular in lateral view, dorsal margin with fingerlike process arising near distal third of dorsal margin and extended posterad, not reaching apex; distal lobe bearing 6 or 7 microsetae, ventral margin with 8 or 9 feeble microsetae, dorsal bridge occupying more than one-third length of pygofer (Figs 16,17). Anal tube gradually narrowed apically (Fig. 18). Subgenital plate longer than pygofer lobe in lateral view, broad basally, fused in basal two-thirds, tapered distally, apex rounded and strongly narrowing, with sparse scattered microsetae, 6-8 macrosetae arranged in single row along each dorsolateral margin near midlength (Fig. 19). Connective widest medially with subapical angular projection in lateral view, apical margin emarginate medially (Figs 20, 21). Style apodeme much shorter than apophysis, preapical lobe absent, without conspicuous setae, slightly broadened preapically, apex smooth, slightly broadened then tapered to hooklike tip, curved laterad (Fig. 22). Aedeagus with shaft broad at base, narrowed near middle and with broad dorsal distal lobe in lateral view; pair of slender distal processes extended laterad from adjacent gonopore, each with short dorsomedially directed spine and elbow-like bend near midlength with distal part curved dorsomesad in posterior view and anterodorsad in lateral view (Figs 23, 24).
Notes. This new species differs from Cornicola mizuki by the characters noted in the key.
Distribution. China (Chongqing).
Etymology. The species name is derived from the Latin word 'maculatus', referring to the black spots on the crown and thorax.
"year": 2023,
"sha1": "79ed1abad0a498bb1345274ab4010463bfd2f5d0",
"oa_license": "CCBY",
"oa_url": "https://zookeys.pensoft.net/article/94800/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c374d02ac4222aac1f2bb4398ac57fcc1d041eb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Romiplostim in adult patients with newly diagnosed or persistent immune thrombocytopenia (ITP) for up to 1 year and in those with chronic ITP for more than 1 year: a subgroup analysis of integrated data from completed romiplostim studies
Summary
The thrombopoietin receptor agonist romiplostim is approved for second-line use in chronic immune thrombocytopenia (ITP), but its effects in patients with ITP for ≤1 year are not well characterized. This analysis of pooled data from 9 studies included patients with ITP for ≤1 year (n = 311) or >1 year (n = 726) who failed first-line treatments and received romiplostim, placebo or standard of care. In subgroup analysis by ITP duration, patient incidences for platelet response at ≥75% of measurements were higher for romiplostim [ITP ≤1 year: 74% (204/277); ITP >1 year: 71% (450/634)] than for placebo/standard of care [ITP ≤1 year: 18% (6/34); ITP >1 year: 9% (8/92)]. Of patients with ≥9 months on study, 16% with ITP ≤1 year and 6% with ITP >1 year discontinued romiplostim and maintained platelet counts ≥50 × 10 9 /l for ≥6 months without ITP treatment (treatment-free remission). Independent of ITP duration, rates of serious adverse events and bleeding were lower with romiplostim than placebo/standard of care, and thrombotic events occurred at similar rates. In this analysis, romiplostim and placebo/standard of care had similar safety profiles, and romiplostim increased platelet counts in patients with either ITP ≤1 year or ITP >1 year, with more treatment-free remission in those with ITP ≤1 year.
Immune thrombocytopenia (ITP) is an autoimmune disease characterized by platelet counts <100 × 10 9 /l in the absence of other blood count abnormalities (Rodeghiero et al, 2009). This disorder is associated not only with an increased risk of bleeding but also with a mildly increased risk of thrombosis (Sarpatwari et al, 2010; Rodeghiero, 2016, 2017), which can be further increased with splenectomy (Boyle et al, 2013; Ruggeri et al, 2014; Doobaree et al, 2016; Rodeghiero, 2018). The thrombocytopenia in ITP has long been attributed to increased destruction of opsonized platelets by the spleen (Harrington et al, 1951), but is now understood to also be a problem of inadequate platelet production, with both cellular and antibody-mediated immune mechanisms inhibiting platelet production in the bone marrow (Gernsheimer et al, 1989; Olsson et al, 2003; McMillan et al, 2004; Stasi et al, 2008).
The clinical course of an adult ITP patient may be assessed during three separate phases (Rodeghiero et al, 2009). The first is the newly diagnosed phase (<3 months) in which the patient initially presents with thrombocytopenia and is assessed for the need for acute treatment depending on the severity of bleeding. For many, the thrombocytopenia continues despite initial therapy and enters a persistent phase (3-12 months). Finally, ITP is considered chronic when it lasts for >12 months (Rodeghiero et al, 2009). It is not established whether these three chronological stages reflect any differences in pathophysiology, treatment response or likelihood of remission.
There is suggestive evidence that disease progression may occur over 1 year due to "epitope spreading" (McMillan, 2007), and there remains discussion as to whether more aggressive earlier therapy might mitigate the conversion to chronic disease (Zaja et al, 2010).
The initial therapy for symptomatic, newly diagnosed ITP has long been corticosteroids supplemented with intravenous gamma globulin (IVIg) for those patients with more serious bleeding (Neunert et al, 2011). Up to 80% of patients respond with a rise in platelet counts, but the majority relapse upon corticosteroid discontinuation or dose reduction, necessitating other therapies. Subsequent second-line treatments are either surgical (e.g. splenectomy) or medical [e.g. thrombopoietin (TPO) receptor agonists, rituximab, azathioprine, mycophenolate mofetil, ciclosporin or danazol]. Regarding newer therapies for ITP, the SYK inhibitor fostamatinib has been approved by the United States Food and Drug Administration for use in chronic ITP (Newland et al, 2018; https://tavalisse.com/downloads/pdf/Tavalisse-Full-Prescribing-Information.pdf), while other TPO receptor agonists, such as avatrombopag (Bussel et al, 2014) and lusutrombopag (Katsube et al, 2016), and agents with other mechanisms of action are under development.
TPO receptor agonists increase platelet production and have demonstrated marked benefit in treating patients with chronic ITP. Extensive studies with romiplostim and eltrombopag in chronic ITP patients have shown a response rate over 60%, long-term efficacy, reduced bleeding, minimal side effects, improved quality of life and a reduced need for rescue therapy, and have allowed most patients to discontinue other forms of ITP treatment, such as corticosteroids (https://www.pi.amgen.com/~/media/amgen/repositorysites/piamgen-com/nplate/nplate_pi_hcp_english.pdf; https://www.pharma.us.novartis.com/sites/www.pharma.us.novartis.com/files/promacta.pdf). TPO receptor agonists are currently approved by regulatory agencies in the United States and elsewhere only for second-line treatment of patients with chronic ITP not responsive to corticosteroids, immunoglobulins or splenectomy.
While most attention has been focused on the use of TPO receptor agonists in patients with chronic ITP (currently defined as >1 year, previously ≥6 months), it remains to be clarified whether similar treatment benefits will occur in patients with ITP for ≤1 year, including newly diagnosed or persistent ITP. No prospective, randomized, placebo-controlled study has examined TPO receptor agonist therapy specifically in patients with ITP ≤1 year, but these patients were allowed to enrol in a number of clinical studies. In this retrospective analysis, we examined the efficacy and safety of romiplostim treatment in adult patients with ITP ≤1 year, compared with those with ITP >1 year, who had failed first-line treatments and subsequently received romiplostim, placebo or standard of care in romiplostim ITP studies. Specifically, we assessed the effect of romiplostim in each ITP duration subgroup for platelet response, bleeding and adverse events, including thrombosis.
Study design and patients
Data were integrated from nine romiplostim studies conducted from 2002 to 2014 that included patients with ITP ≤1 year (Table SI). Three were placebo-controlled studies (Kuter et al, 2008; Shirasugi et al, 2011), one had a standard-of-care control group (Kuter et al, 2010), and three had no control group (Janssens et al, 2015, 2016; Newland et al, 2016). After completing the parent studies, many patients had the option to enter one of two extension studies (Shirasugi et al, 2012; Kuter et al, 2013). One of the single-arm parent studies enrolled only patients with ITP ≤1 year; the other studies enrolled patients with either ITP ≤1 year or ITP >1 year. Four dose-finding studies of romiplostim (Bussel et al, 2006; Newland et al, 2006; Shirasugi et al, 2009) were not included in this analysis.
Outcomes
Platelet response was defined as a platelet count ≥50 × 10 9 /l, excluding platelet counts obtained in the 8 weeks after rescue medication use. Durable platelet response was defined as a platelet response for ≥6 weeks during weeks 17-24 (to allow time for dose titration and effects on thrombopoiesis to be captured), including the studies with a duration of >24 weeks (Kuter et al, 2010; Janssens et al, 2015; Newland et al, 2016). For romiplostim-treated patients, rescue medications were added per investigator to treat or prevent bleeding and could include newly introduced ITP medications and dose or frequency increase of baseline ITP medications other than romiplostim. Most of the patients in the control group (27/34) received standard of care in a study that did not report those types of changes as rescue medications because they were considered part of the standard of care (Kuter et al, 2010).
By definition, serious adverse events could include adverse events of any grade that required or prolonged hospitalization, including nonspecific conditions, such as bleeding (e.g. contusion, purpura) as well as general symptoms (e.g. dehydration, pyrexia), whereas grade ≥4 adverse events were life-threatening conditions, such as cancer (e.g. liver, rectal) or organ failure (e.g. cardio-respiratory arrest, respiratory failure).
Statistical analyses
Results were summarized by ITP duration subgroups of ITP ≤1 year (demographics and baseline disease characteristics were further summarized by ITP duration of <3 months or 3-12 months) or ITP >1 year. For categorical variables, the number and percentage of subjects in each category were summarized. Continuous variables were summarized by Q1 (25th percentile), median, and Q3 (75th percentile). Safety data were adjusted for exposure to reflect the longer exposure to romiplostim (e.g. in extension studies). Duration-adjusted event rates were obtained by using the total number of events divided by the total patient-years on study.
Data for placebo and standard of care were pooled because, for those with ITP ≤1 year, there were too few patients in each category (7 placebo and 27 standard of care) to analyse separately. For patients who received placebo/standard of care in the parent study and romiplostim in an extension study, only placebo/standard of care data from the parent study were used. For patients who received romiplostim in both the parent and extension studies, data from the extension study were also included (Shirasugi et al, 2012;Kuter et al, 2013). Exact binomial confidence intervals (CIs) were used to calculate 95% CIs of incidence rate. For the thrombotic event by platelet count analysis, if a given patient had multiple thrombotic events at different platelet counts, then that patient could be counted in multiple platelet count categories. Due to the post-hoc nature of these analyses, P values were not provided. Instead, the 95% CIs on event rates were used to compare ITP subgroups.
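As an illustration of the two summary measures described above, the sketch below computes a patient incidence with a Clopper-Pearson (exact binomial) 95% CI, using the 204/277 platelet-response proportion reported in the Summary, and a duration-adjusted event rate per 100 patient-years from hypothetical event and exposure counts; the use of statsmodels is an assumption, not the software used in the original analysis.

# Sketch: exact binomial (Clopper-Pearson) 95% CI for a patient incidence and a duration-adjusted event rate.
# The event/exposure figures for the rate are hypothetical.
from statsmodels.stats.proportion import proportion_confint

responders, n = 204, 277                 # platelet response at >=75% of measurements (romiplostim, ITP <=1 year)
low, high = proportion_confint(responders, n, alpha=0.05, method="beta")   # "beta" = Clopper-Pearson
print(f"incidence {responders / n:.1%} (95% CI {low:.1%}-{high:.1%})")

events, patient_years = 12, 270.0        # hypothetical bleeding events and total time on study
rate = events / patient_years * 100      # events per 100 patient-years
print(f"{rate:.1f} events per 100 patient-years")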
Patient demographics, characteristics, and disposition
This integrated analysis included 1037 patients from nine clinical studies (Table SI) (Janssens et al, 2015), a 3-year bone marrow study (n = 169) (Janssens et al, 2016), and a platelet response and remission study (n = 75) . Patients with ITP ≤1 year were primarily from Europe (n = 191) or North America (n = 93).
Of the 311 patients with ITP ≤1 year, 155 (50%) had newly diagnosed ITP (<3 months) and 156 (50%) had persistent ITP (3-12 months) (Table I). In none of these studies was romiplostim used as an initial or rescue therapy. Most patients were Caucasian, with median age in the 50s. The median duration of ITP was 3 months for those with ITP ≤1 year and 72 months (6 years) for those with ITP >1 year. Patients with ITP ≤1 year were less likely to have prior splenectomy (8% vs. 44%) or rituximab use (7% vs. 18%) than the patients with ITP >1 year. Median baseline platelet counts were 18 9 10 9 /l in both ITP duration subgroups.
Of the 911 patients who received romiplostim in the parent studies, 680 (75%) completed those studies, with withdrawal of consent being the most common reason for discontinuing (Fig 1). Of the 223 patients who had the option to enter extension studies and chose to do so, 160 (72%) completed those extension studies.
Efficacy: platelet response
The romiplostim group included 277 patients with ITP ≤1 year and 634 with ITP >1 year (Fig 1). The placebo/standard of care group included 34 patients with ITP ≤1 year and 92 with ITP >1 year. Platelet counts rose in most patients who received romiplostim and remained stably elevated (Fig 2A). The ITP duration subgroups had similar platelet responses over time (Fig 2A-B). The median time to first platelet response for romiplostim-treated patients was 2 weeks in each ITP duration subgroup. For placebo/standard of care, the median time to first response was 4 weeks for patients with ITP ≤1 year and 12 weeks for those with ITP >1 year, but the 95% CIs overlapped. For patients with ITP ≤1 year, platelet response rates were 86% for romiplostim and 62% for placebo/standard of care; for patients with ITP >1 year, platelet response rates were 87% for romiplostim and 33% for placebo/standard of care (Table II). Response rates were notably higher for romiplostim than for placebo/standard of care for more stringent measures such as responding ≥75% or ≥90% of the time or having a durable platelet response (Fig 2B; Table II).
In the examination of how many patients with ≥9 months on study were able to discontinue romiplostim and maintain platelet counts ≥50 × 10 9 /l without any ITP treatments for ≥6 months (i.e. remission), the rate was greater for those with ITP ≤1 year (16%; 95% CI: 11-21%) than for those with ITP >1 year (6%; 95% CI: 4-8%) (Table II). Nine months on study (not necessarily 9 months of exposure) was chosen as an appropriate period to assess for these treatment-free periods as it allowed sufficient time to escalate to a stable romiplostim dose and for the effects of romiplostim to be observed (i.e. a few months titrating the dose and ≥6 months off romiplostim).
Efficacy: rescue medication use
For romiplostim-treated patients, rescue medications were used in 44% of those with ITP ≤1 year and 50% of those with ITP >1 year (Table II). Use of rescue medication decreased over time irrespective of ITP duration, with roughly comparable rates between the ITP duration subgroups, and appeared to be higher in the first few weeks when the romiplostim dose was being titrated and other ITP therapies were being reduced or discontinued (Fig 3A). When examined by type of rescue medication, the use of corticosteroids (~60% of rescue medication use) decreased after the first few months and then fluctuated, dropping by approximately 75% over time (Fig 3B). Use of IVIg (~20% of rescue medication use) decreased somewhat over time ( Fig 3C). The rate of decline of corticosteroid or IVIg use was similar between the two ITP duration subgroups.
Bleeding
Although the patient incidence of bleeding varied modestly between groups (Table II), after adjustment for time on study, exposure-adjusted rates of bleeding in both ITP duration subgroups were lower for romiplostim than placebo/standard of care, with non-overlapping 95% confidence intervals (Table III). In both treatment groups, exposure-adjusted bleeding rates were higher in patients with ITP >1 year than in those with ITP ≤1 year. In the romiplostim group, overall bleeding and grade ≥2 bleeding decreased over time to similar extents in both ITP duration subgroups (Fig 4).
Safety
The median average weekly romiplostim dose was similar for both ITP duration subgroups (Table SII), did not vary much over time, and generally was ~3-4 µg/kg per week. Compared with what was already known about the use of romiplostim in chronic ITP (Kuter et al, 2008, 2010, 2013), the safety profile was as expected (Tables III and SIII-SV). Overall, the safety of romiplostim was comparable for the two ITP duration subgroups and event rates were low for romiplostim relative to placebo/standard of care (Tables III and SIII). Serious adverse event rates were similar between and (v) haemolysis secondary to a refractory urinary tract infection that led to disseminated intravascular coagulation in a 61-year-old woman at study week 5. Bone marrow fibrosis or increased reticulin was reported for 3 patients in the romiplostim group and no patients in the placebo/standard of care group; however, the significance of this is unclear as bone marrow biopsies were not performed in most patients. Of the 3 patients, 1 had presence of collagen and 2 had a 2-grade increase in reticulin (per modified Bauermeister score); all 3 were from the bone marrow study with regularly scheduled bone marrow biopsies (i.e. biopsies were not due to concerns on peripheral smears or other findings) (Janssens et al, 2016).
In the romiplostim group, thrombotic/thromboembolic events occurred across a wide range of platelet counts and were independent of platelet count (Fig 5); there were too few thrombotic events with placebo/standard of care to analyse them by platelet count. Thrombosis rates were similar between the romiplostim and placebo/standard of care groups (Table SIV). There was a numerical increase in the rates of thrombotic/thromboembolic events overall for romiplostim-treated patients with ITP >1 year versus those with ITP ≤1 year (6·1 vs. 4·4 per 100 patient-years) but the 95% CIs overlapped. Rates of thrombotic events for romiplostim-treated patients increased with age (Figure S1), as has been reported previously (Ruggeri et al, 2014). Thrombotic events occurring in ≥5 romiplostim-treated patients were only seen in patients with ITP >1 year and included deep vein thrombosis, pulmonary embolism, myocardial infarction and superficial thrombophlebitis (Table SV). Thrombosis rates were similar between the two ITP duration subgroups. No differences were seen by prior splenectomy status or in arterial versus venous thromboses. While placebo/standard of care data are provided for reference, the small number of patients in this subgroup limits comparison.
Discussion
The results of these analyses demonstrate that romiplostim therapy is as effective in patients with either newly diagnosed or persistent ITP (≤1 year), as it is in those who have already developed chronic ITP (>1 year). With romiplostim treatment, time to platelet response, platelet counts, height of platelet count rise, reduction in use of rescue medications, and reduction in bleeding were all similar between patients with ITP ≤1 year and those with ITP >1 year. Romiplostim appears to work in ITP by increasing the rate of platelet production (Meyer et al, 2012), as supported by studies in which TPO prevented megakaryocyte apoptosis and thereby increased platelet production (Harker et al, 1998). Based on the beneficial effect of romiplostim in patients with ITP (Kuter et al, 2008, 2010, 2013; Shirasugi et al, 2011, 2012; Janssens et al, 2015, 2016; Newland et al, 2016), romiplostim is approved by many regulatory agencies for the treatment of chronic ITP that has not responded to corticosteroids, IVIg or splenectomy, but not for patients with ITP ≤1 year. It remains to be clarified whether patients with ITP for ≤1 year, including either newly diagnosed ITP (<3 months) or persistent ITP (3-12 months) (Rodeghiero et al, 2009), differ in a pathophysiological way from chronic ITP. In all ITP phases, platelet kinetic studies have demonstrated an increased rate of platelet destruction, as well as inhibition of platelet production (Heyns Adu et al, 1986; Ballem et al, 1987; Gernsheimer et al, 1989). This study shows that by increasing platelet production with romiplostim, platelet responses are similar between patients with ITP ≤1 year and those with ITP that has become chronic.
The limitations of this exploratory retrospective analysis are clear. The individual studies had varying study designs (Table SI). We attempted to adjust for this by evaluating all patients with uniform diagnosis and outcome definitions and we presented the data as a function of exposure time. Additionally, non-responders are typically more likely to leave studies early, which would select for those who respond well. Approximately 25% of patients discontinued from the parent studies, so hopefully that factor does not unduly affect our conclusions. The placebo/standard of care group in the integrated analysis was too small to make any major statements beyond that it showed a higher rate of adverse events and bleeding than the romiplostim group in each ITP duration subgroup.
Nonetheless, the marked treatment effect of romiplostim as compared with placebo or standard of care has been shown in other prior studies (Kuter et al, 2008, 2010), and one can infer that the large treatment effects seen there apply to patients with either ITP ≤1 year or ITP >1 year. A major caveat is that comparisons between ITP duration subgroups are limited because patients with ITP >1 year may have more severe disease that is refractory to other therapies, as was seen in the greater number of prior ITP therapies in this subgroup of the analysis. Further, there were relatively few patients with ITP ≤1 year who received placebo or standard of care, and most of those patients did not have rescue medication use recorded (due to the standard of care study design), making comparisons across the treatment groups problematic. Comment should be made regarding some other comparisons between patients with ITP ≤1 year and those with ITP >1 year. Remission, defined as a treatment-free period of ≥6 months, was numerically more frequent in patients with ITP ≤1 year than in those with ITP >1 year (16% vs. 6%), which may reflect more severe and refractory disease for those with ITP >1 year. These rates may underestimate the true occurrence of remission because most of the studies followed standard dosing rules without a forced taper of romiplostim treatment. Only one study had a dose-tapering scheme that was designed to detect remission. In that study, which included only patients with ITP ≤1 year, 32% achieved remission after discontinuation of romiplostim treatment.
In each ITP duration subgroup, rates of adverse events and serious adverse events were lower for romiplostim than placebo/standard of care, and rates of thrombotic events were similar between the treatment groups. These findings are consistent with the identical rate of thrombotic events previously reported for romiplostim and placebo/standard of care overall (5·5 per 100 patient-years for both groups) (Cines et al, 2015). Patients with ITP >1 year had a numerically higher rate of thrombotic/thromboembolic events than patients with ITP ≤1 year, but the 95% CIs overlapped. This increase could reflect greater disease severity in patients with chronic ITP. Other factors (e.g. age, gender, splenectomy status and other therapies, such as corticosteroids) could also influence the rate of thrombosis. In this analysis, rates of thrombotic events for romiplostim-treated patients increased with age, as has been reported previously (Ruggeri et al, 2014). Thrombotic events in romiplostim-treated patients were independent of platelet count, which may reflect both an increased risk of thrombosis in ITP and increased monitoring for adverse events in clinical trials as compared with epidemiological studies (Ruggeri et al, 2014).
In both treatment groups, bleeding rates were higher in patients with ITP >1 year than in those with ITP ≤1 year, possibly because patients with ITP >1 year had more severe disease that was refractory to other therapies. In both ITP duration subgroups, exposure-adjusted rates of bleeding were lower for romiplostim than for placebo/standard of care. As it took longer for patients receiving placebo or standard of care to achieve a platelet response, most likely in response to standard of care (which was given to more patients than placebo), they also had a longer window in which they could have developed bleeding. We cannot exclude a preferential dropout from studies for romiplostim-treated patients experiencing bleeding, which may reflect the convergence of bleeding rates over time. Assessment of bone marrow fibrosis (collagen and reticulin) was hampered by the fact that most patients did not undergo bone marrow biopsy as part of the study protocol. However, a prospective bone marrow study showed a low rate (7%) of bone marrow reticulin in patients treated with romiplostim (Janssens et al, 2016).
In this post-hoc, retrospective analysis of patients with either ITP ≤1 year or ITP >1 year across nine clinical studies of romiplostim, the efficacy and safety profile of romiplostim was similar in both ITP duration subgroups. Romiplostim is approved for use in patients with chronic ITP, but the data presented here suggest that patients with ITP ≤1 year, including either newly diagnosed ITP (<3 months) or persistent ITP (3-12 months), are as responsive to romiplostim as are patients with chronic ITP (>1 year). Earlier treatment with romiplostim may also be associated with a reduced exposure to corticosteroids. Further confirmatory studies would be helpful to assess the effects of romiplostim treatment as firstline and/or combination therapy with corticosteroids or rituximab to reduce corticosteroid use, avoid splenectomy, and achieve remission in patients with newly diagnosed or persistent ITP before it becomes chronic.
Supporting Information
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Figure S1. Thrombotic events by age and ITP duration in the romiplostim group.
Table SI. Parent studies included in the analysis by ITP duration.
Table SII. Baseline platelet count and romiplostim exposure by ITP duration.
Table SIII. Duration-adjusted AE rates to first event by ITP duration.
Table SIV. Duration-adjusted incidences of thrombotic/thromboembolic events by ITP duration.
Table SV. Types of thrombotic/thromboembolic events with romiplostim by ITP duration.
"year": 2019,
"sha1": "7761cd6fc748637427bfe861ad0cf71f4f23871b",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/bjh.15803",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7761cd6fc748637427bfe861ad0cf71f4f23871b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Heterogeneity in testing practices for infections during pregnancy: national survey across Switzerland.
QUESTION
Detection and treatment of infections during pregnancy are important for both maternal and child health. The objective of this study was to describe testing practices and adherence to current national guidelines in Switzerland.
METHODS
We invited all registered practicing obstetricians and gynaecologists in Switzerland to complete an anonymous web-based questionnaire about strategies for testing for 14 infections during pregnancy. We conducted a descriptive analysis according to demographic characteristics.
RESULTS
Of 1138 invited clinicians, 537 (47.2%) responded and 520 (45.6%) were eligible as they are currently caring for pregnant women. Nearly all eligible respondents tested all pregnant women for group B streptococcus (98.0%), hepatitis B virus (HBV) (96.5%) and human immunodeficiency virus (HIV) (94.7%), in accordance with national guidelines. Although testing for toxoplasmosis is not recommended, 24.1% of respondents tested all women and 32.9% tested at the request of the patient. Hospital doctors were more likely not to test for toxoplasmosis than doctors working in private practice (odds ratio [OR] 2.52, 95% confidence interval [CI] 1.04-6.13, p = 0.04). Only 80.4% of respondents tested all women for syphilis. There were regional differences in testing for some infections. The proportion of clinicians testing all women for HIV, HBV and syphilis was lower in Eastern Switzerland and the Zurich region (69.4% and 61.2%, respectively) than in other regions (range 77.1-88.1%, p <0.001). Most respondents (74.5%) said they would appreciate national guidelines about testing for infections during pregnancy.
CONCLUSIONS
Testing practices for infections in pregnant women vary widely in Switzerland. More extensive national guidelines could improve consistency of testing practices.
Introduction
Untreated infections in pregnancy can cause substantial morbidity in pregnant women and the fetus or newborn. Transmission of infection can result from transplacental transmission, amniotic fluid infection or during labour. Adverse pregnancy outcomes include miscarriage, preterm labour, premature rupture of membranes, preterm birth, stillbirth and perinatal infectious complications [1-4]. Consequences for the fetus and newborn range from asymptomatic infection to sepsis, fetal malformations and fetal death [5,6]. Several infections are acquired through sexual intercourse (human immunodeficiency virus [HIV], syphilis, chlamydia, gonorrhoea, hepatitis B virus [HBV] and herpes simplex virus [HSV]) so there are implications for sexual history taking and partner treatment. HBV, varicella and rubella infections are preventable with vaccinations, if administered before pregnancy. Other infections like cytomegalovirus (CMV), parvovirus and toxoplasmosis can be tested for if abnormalities occur during pregnancy, but there is no established treatment during pregnancy for these infections. Policies and practices for testing pregnant women for infections during pregnancy differ between countries, depending on the incidence and prevalence of infection, on historical precedents, on interpretation of the research evidence and on the type of health care system [7-9]. To date, there is no European consensus on routine testing for infections during pregnancy. In Switzerland, there is no single guideline about testing for infections in pregnancy and no single body responsible for guidelines. The Federal Office of Public Health (FOPH) and the Swiss Society of Gynaecology and Obstetrics (SSGO) have issued recommendations for specific infections: to screen all women for hepatitis B, HIV, group B streptococcus, varicella and rubella [10-15]; further, to screen pregnant women for syphilis [16], and not to screen for toxoplasmosis [17]. Testing practices in antenatal care and adherence to national recommendations have not been investigated at a national level in Switzerland. This survey was designed to describe current practices followed by gynaecologists and obstetricians in Switzerland, to identify regional differences and to determine factors associated with testing for specific infections during antenatal care.
Methods
A multidisciplinary team of gynaecologists/obstetricians and specialists in infectious diseases, paediatrics and public health at University Hospital Bern, Cantonal Hospital St. Gallen and the Institute of Social and Preventive Medicine in Bern in Switzerland developed the survey "Testing for infections during pregnancy in Switzerland". We used a web-based questionnaire created with an online application (SurveyMonkey ® ). The questionnaire was available in three languages (German, French, Italian) and the questions were pretested by 50 gynaecologists at the Bern University Hospital to assess readability and content validity. A total of 12 questions were included to characterise the responding doctors and their infection screening practices. The information collected about the doctors included the number of pregnant patients attended per year, place of work (private practice, hospital), year of medical board certification (specialisation) in obstetrics and gynaecology, region of work in Switzerland (according to the Swiss Federal Statistics Office [18]: Eastern Switzerland, Zürich region, Central Switzerland, North-western Switzerland, Midland region [Bern, Solothurn, Neuchâtel, Jura and Fribourg], Lake Geneva region [Geneva, Vaud and Valais] and Ticino) and gender of the physician. In addition, we asked whether doctors took a sexual history from pregnant women at increased risk of sexually transmitted infection (STI), defined as having more than one partner and unprotected vaginal intercourse with different partners during pregnancy. Clinicians were then instructed to indicate the strategy they applied, based on their clinical practice, for testing for each of the following infections (in alphabetical order): bacterial vaginosis (BV), Chlamydia trachomatis (chlamydia), cytomegalovirus (CMV), gonorrhoea, hepatitis B virus (HBV), human Immunodeficiency virus (HIV), herpes simplex virus (HSV), hepatitis C virus (HCV), parvovirus B19, rubella, group B streptococcus (GBS), syphilis, toxoplasmosis and varicella zoster virus (VZV). There were five mutually exclusive categories of testing strategy: universal screening, testing women at high risk (multiple sex partners, a history of injecting drug use or any other risk judged relevant to the infection), testing if the pregnant woman had clinical symptoms or if there were fetal signs (e.g. on ultrasound), testing at the request of the women, and not testing at all. The initial survey invitation was distributed to all clinicians who are active members of the SSGO, which covers 98% of all practicing obstetricians and gynaecologists in Switzerland. The invitation e-mail contained a short description of the survey as well as the web link for the online survey. Two reminders were sent out over a total time period of 5 months between February and June 2015. This study was exempt from approval of the ethics committee, as it did not include patients' data but collected anonymous information about the views and practices of healthcare professionals only. Statistical analyses were conducted using STATA 12 (Stata Corporation, College Station, TX). We described the frequency of testing strategies for each infection as percentages, excluding missing values. We considered region of work as the main exposure variable and analysed overall differences between regions with chi-square tests. Three outcomes were analysed with logistic regression models: testing according to recommendation (i.e. 
testing all women for HIV, HBV and syphilis), not testing for toxoplasmosis, and inquiring about a sexual history. For each outcome, we examined univariable associations with region of work, workplace (hospital or private practice), years since specialisation and gender. Results are expressed as odds ratios (OR) and 95% confidence intervals (95% CI). For testing according to recommendation, we fitted a multivariable model of the association between region of work and testing practice, controlling for workplace, years since specialisation and gender. We present the probability of testing in each region (with 95% CI), after controlling for potential confounding. We present results about screening practices in two groups: infections that are mentioned in guidelines and infections that are not mentioned.
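The analyses described above were carried out in Stata 12, and the survey data themselves are not reproduced here. Purely as an illustration of the modelling steps (univariable and multivariable logistic regression, odds ratios with 95% CIs, and adjusted probabilities of testing by region), the following minimal sketch uses Python with statsmodels, synthetic data and hypothetical column names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(1)

# Hypothetical survey data: one row per responding doctor.
# Column names and values are illustrative, not those of the actual dataset.
df = pd.DataFrame({
    "tests_all": np.random.binomial(1, 0.75, 500),  # tested all women for HIV, HBV and syphilis
    "region": np.random.choice(["Geneva", "Midland", "Zurich", "Eastern"], 500),
    "workplace": np.random.choice(["hospital", "private"], 500),
    "years_since_spec": np.random.randint(0, 35, 500),
    "gender": np.random.choice(["female", "male"], 500),
})

# Univariable association between region of work and testing practice.
uni = smf.logit("tests_all ~ C(region)", data=df).fit(disp=False)

# Multivariable model controlling for workplace, years since specialisation and gender.
multi = smf.logit(
    "tests_all ~ C(region) + C(workplace) + years_since_spec + C(gender)",
    data=df,
).fit(disp=False)

# Odds ratios with 95% confidence intervals, the format in which results are reported.
or_table = pd.concat([np.exp(multi.params), np.exp(multi.conf_int())], axis=1)
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)

# Adjusted probability of testing in each region, with the other covariates held fixed.
grid = pd.DataFrame({
    "region": sorted(df["region"].unique()),
    "workplace": "private",
    "years_since_spec": df["years_since_spec"].mean(),
    "gender": "female",
})
print(grid.assign(p_testing=multi.predict(grid)))
```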
Characteristics of participants
A total of 1138 clinicians were sent an email invitation and 537 (47.2%) responded. Seventeen of the 537 responding doctors (3.2%) were not seeing pregnant patients and thus did not answer further questions. The characteristics of the 520 eligible participating doctors are shown in table 1. The distributions of eligible participants according to sex and region of work did not differ from those of all members of the SSGO (see appendix, supplementary table S3).
Sexual history taking
Only 94/515 (18.3%) of the respondents reported that they took a sexual history from all their patients during antenatal care and 31.7% never did so (table 1). Overall, 137 (26.6%) asked only single women about their sexual risks and 98 (19.0%) asked only at the beginning of pregnancy. Male doctors were more likely than female doctors to ask about sexual history and sexual risk during antenatal care (26.4% vs 16.1%; OR 1.87, 95% CI 1.19-2.96, p = 0.007). The evaluation of sexual history was not associated with region of work, years since specialisation or place of work (supplementary table S1).
Infections mentioned in guidelines
We found that nearly all respondents reported that they tested all women for GBS (98.0%, 502/512), HBV (96.5%, 497/515) and HIV (94.7%, 479/506) during antenatal care, in accordance with national recommendations (fig. 1). Amongst these infections, the time at which testing was done varied most for HBV, with 55.5% (287/517) reporting testing during the first trimester, 7.5% (39/517) in the second and 31.3% (162/517) routinely in the third trimester. Syphilis testing for all pregnant women was reported by 80.4% (410/510) of respondents and 10.6% (54/510) said that they tested women at high risk of infection; only 4.7% (24/510) reported that they never test for syphilis during pregnancy. There were geographic differences in the proportion of clinicians that tested all women for syphilis (table 2, fig. 2); more than 90% of physicians did so in most regions (96.6% in the Geneva region and 92.0% in Northwestern Switzerland), but only 65.4% in the Zurich region and 73.3% in Eastern Switzerland (p <0.001). In these regions, 10.4% and 7.9%, respectively, reported that they tested only women at high risk of syphilis. For women with positive syphilis serology test results, 45.1% (227/503) of doctors reported that they send their patients to infectious disease specialists for antibiotic treatment. When we focused on doctors who reported that they tested all pregnant women for HIV, HBV and syphilis, there were no differences according to place of work (hospital or private practice), year of specialisation or sex of the participant compared with doctors who did not test all women. There was a difference between regions, with lower rates of reported testing in Eastern Switzerland and the Zurich region (69.4% and 61.2%, respectively) compared with the other regions (range 77.1-88.1%, p <0.001). This difference remained after adjusting for doctor's gender, place of work and year of specialist certification (fig. 3). Hospital doctors were more likely than doctors working in private practice not to test for toxoplasmosis (OR 2.52, 95% CI 1.04-6.13, p = 0.04), but there was no association with gender or year of specialisation.
Infections not mentioned in guidelines
For genital tract infections, 65.0% (333/512) of respondents reported testing all women for BV, 49.7% (252/507) for chlamydia and 18.3% (91/498) for gonorrhoea. There were geographical differences in testing for all three infections. In general, doctors in German-speaking regions and the Ticino reported testing all women for these infections more often than doctors in the French-speaking region. Doctors practicing in hospitals were more likely than those in
Discussion
This is the first national study in Switzerland to have assessed obstetricians' and gynaecologists' testing practices for 14 infections during pregnancy. More than 90% of respondents reported that they tested all women for HIV, HBV and GBS and 88% for rubella, in accordance with national recommendations. Overall, 80% of respondents reported testing all women for syphilis. We found that 24% of respondents reported testing all women for toxoplasmosis, 7 years after the publication of recommendations not to screen. Our study demonstrated geographical variations in testing strategies for several infections. Most respondents said that they would like to have national guidelines on testing for infections in pregnancy. The strength of this study was the large sample of practicing obstetricians and gynaecologists in Switzerland. The participation rate was 47% and respondents were representative of all SSGO members according to sex and region of work. Limitations of the survey include the potential for participation bias. If respondents are more likely to be those who test for infections, we will have overestimated the coverage of antenatal screening for infections. Reported levels of testing do not necessarily correspond to testing of individual women in practice. These levels of testing might also be overestimated if respondents felt that they were expected to say that they tested. The anonymous nature of the survey should have reduced such a bias.
Adherence to national recommendations for universal screening in pregnancy
We found very high adherence to specific recommendations published on the SSGO website stating that all pregnant women should be tested for GBS [11], HBV [10] and HIV [11]. Studies in Switzerland have shown successful prevention of mother to child transmission of GBS by intrapartum antibiotics in women with GBS colonisation [19] and very low levels of early onset sepsis with GBS in newborns (0.12/1000 live births) [20]. Testing for GBS is almost universal practice in Switzerland according to our survey results. This might be explained by a clear Swiss recommendation for GBS screening in pregnant women and is an example of effective implementation of a national screening strategy that was introduced by the SSGO.
International recommendations for universal screening: global campaign to eliminate congenital syphilis
Our results show that antenatal syphilis screening could be improved in Switzerland. There are large geographical variations, which might suggest that doctors in some regions screen all women but others only test women at high risk of syphilis. The World Health Organization (WHO) aims to eliminate mother to child transmission of syphilis and recommends screening of all pregnant women to allow timely diagnosis and treatment of infected women and their partners [21]. The number of infectious syphilis cases in pregnant women in Switzerland increased between 2006 and 2009 [16]. Furthermore, the United States Centers for Disease Control and Prevention reported an increase in cases of congenital syphilis in 2014 [22]. This finding underlines the importance of screening all women for syphilis during pregnancy.
Screening for rubella and varicella and opportunities for vaccination
We found that 88% of Swiss obstetricians and gynaecologists test all pregnant women for rubella antibodies, as recommended by the FOPH. Serological testing might reflect the fact that many women do not have their immunisation records. According to a serological study of women who gave birth, rates of seronegative status for rubella were as low as 3.2% in Swiss/German/Austrian women [23] compared with 7.8% of patients from other European countries. Even with low seronegative rates, the postpartum period gives an important opportunity to vaccinate seronegative women for rubella and varicella in order to provide protection in subsequent pregnancies.
Withdrawn recommendations for screening: toxoplasmosis
In 2009, Swiss recommendations about screening for toxoplasmosis changed and stated that routine screening for toxoplasmosis in pregnancy should not be done [17]. Instead, hygienic measures (e.g. avoiding consumption of raw meat) are recommended for all pregnant women. The main reasons for this decision were the low prevalence of toxoplasmosis in Switzerland, the limited options for therapy following seroconversion during pregnancy, and the low specificity of toxoplasmosis serology testing, resulting in many false positive tests. False positive test results lead to unnecessary anxiety and might even result in termination of pregnancy of an unaffected fetus. Our survey results indicate that nearly a quarter of clinicians still test all pregnant women for toxoplasmosis, and one third are still testing at the request of the patient. Neighbouring countries, especially France, still recommend screening for toxoplasmosis during pregnancy [24]. Accordingly, the highest proportion of doctors testing for toxoplasmosis was found in the French-speaking part of Switzerland (48%). These findings show the difficulty of changing practice after a shift in screening recommendations.
Controversies about screening for infections in pregnancy: bacterial vaginosis and chlamydia
Heterogeneity in testing practices for infections in pregnancy can reflect ongoing debates about the effectiveness of screening. Testing rates for BV in asymptomatic pregnant women varied geographically, from a quarter in the Geneva region to more than 80% in the Zurich region. Although the correlation between BV and severe complications in pregnancy and postpartum (preterm birth, late miscarriage, postpartum endometritis) is well established, the benefits of treatment of BV in pregnancy have been debated because of inconsistent study results [25]. Nevertheless, if BV is treated early in pregnancy (before 16 weeks), the risk for preterm birth might be reduced [26,27]. Furthermore, the prevalence of BV in Switzerland has been shown to be as high as 32% [28]. Similarly, observational studies show associations between chlamydia infection in pregnancy and adverse pregnancy outcomes [2,4], but the lack of randomised controlled trials [29] perpetuates debate about whether routine screening of asymptomatic pregnant women reduces these outcomes [30]. Screening of all pregnant women for chlamydia is recommended in countries such as the USA [31], Estonia, Germany and Latvia, but other countries such as the UK actively recommend not screening, based on an evidence review [30].
History taking for sexual risk and sexually transmitted infections
Adverse outcomes of STIs such as chlamydia, gonorrhoea, syphilis, HBV, HIV and HSV in pregnancy are well documented [1][2][3][4]. For example, the risk of HIV transmission to the newborn is highest in women with incident HIV infection during pregnancy [32]. Numbers of reported cases of syphilis, chlamydia and gonorrhoea are increasing in Switzerland [33]. Acquisition of these infections is associated with the number of current and past sexual partners, with non-use of condoms and with the behaviours of sexual partners [34]. Sexual history taking is a sensitive topic, especially in pregnant women as they often attend consultations together with their male partner, so assessing risk in regard to STI is challenging [35]. This survey shows a need to improve sexual history taking during pregnancy, especially since levels of sexual risk assessment were low, irrespective of the time since specialist certification.
New evidence from research: perinatal treatment for HCV and HBV
Hepatitis C screening is carried out by 40% of our survey respondents. Until now, there have been no measures to reduce the transmission risk of HCV (about 6%) to the newborn during pregnancy and delivery [36]. New direct-acting antivirals, which are able to clear HCV infection in more than 90% of patients within 10-12 weeks of therapy, should stimulate a re-evaluation of testing and treatment strategies before and during pregnancy [37]. Treatment of the mother in the interval between pregnancies or before conception could avoid any HCV exposure of newborns in subsequent pregnancies [38]. A recent UK study suggests that general HCV screening during pregnancy could be cost effective [39]. Timely HBV screening during pregnancy has important implications for prevention of mother to child transmission. Nearly all doctors in our survey routinely test for HBV (96.5%), which is recommended by the Swiss national health authorities and the SSGO. Treatment of HBV-infected pregnant women with high viral load reduces mother to child transmission in both actively and passively vaccinated newborns [40]. Therefore, women positive for HBV surface antigen should be assessed for viral load and offered treatment during the third trimester of gestation. This is not yet included in the Swiss national guidelines, which were last published in 2007 and should be updated.
Conclusion
Our survey reached nearly half of all registered obstetricians and gynaecologists in Switzerland and found that three quarters would appreciate clear guidelines about testing for infections during pregnancy. National guidelines for HIV, HBV and GBS testing show that wide dissemination and clear recommendations allow high and consistent implementation.
We conclude that measures should be taken to provide a consistent format for recommendations for all relevant infections in pregnancy. Such recommendations should be evidence-based, straightforward and transparent, as well as practice-and patient-oriented, in order to achieve high adherence by doctors. As an additional benefit, a more uniform national strategy with high adherence will allow regular evaluation of the current national epidemiological situation of infections in pregnant women. The results of this survey can be used by healthcare authorities to make decisions about the evaluation and implementation of antenatal infection screening programmes in Switzerland. More extensive national guidelines could improve consistency of testing practices.
Appendix: Heterogeneity in testing practices for infections during pregnancy: national survey across Switzerland
Figure 3
Probability of reporting testing of all women for human immunodeficiency virus (HIV), hepatitis B virus (HBV) and syphilis, by region adjusted for place of work, year of specialist certification, and gender. The dashed line indicates the overall mean, boxes are the point estimate, bars are 95% confidence intervals (CIs). | 2018-04-03T02:13:49.651Z | 2016-07-11T00:00:00.000 | {
"year": 2016,
"sha1": "4534563eae6fa10534164548e8c1d2c3dd2c6e7a",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4414/smw.2016.14325",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "29b343f2b62599cf7090009e60ede8d3505580ca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249451978 | pes2o/s2orc | v3-fos-license | Evolving Technology in Arts, Fashion and Textile Design
Art is an expressive medium, uniquely human, involving the conscious use of skill and creative imagination. Technology is reinventing how art and designs are made, making the process more efficient, faster and more accurate. The embracing of technologies by the textile world has accelerated the consumption of textile products in the 21st century compared to previous centuries. It has made it simpler for different brands in the fashion industry to display their styles and vogues to a larger audience, while the emergence of design applications has made designs unique. Using the art of the 1990s as an example, there are obvious differences and improvements in the means of expression and content compared to the art of the 21st century. The increase in the use of computers and the internet has made it simpler for people to interact and share ideas, which has contributed to growth in the industry. This paper identifies how information technology has enhanced the growth of the arts and textile industry.
INTRODUCTION
In the age of mass production, skilled workers must adapt to maintain their role in the textile industry. Skilled workers in this context can be regarded as craftsmen with extensive knowledge of textile production and textile-related objects who work with the tools currently available. As
machines became more powerful, they were able to produce textiles in less time and at lower cost, making it harder for individual craftsmen to keep up economically and keep pace. Technology has driven the growth of human society in myriad ways. From basic food, clothing and shelter needs to advanced robotics and healthcare, technology has become an undoubtedly essential and highly effective set of tools in our time. Clothing has always been a necessity associated with human civilization and a means of expressing one's culture and interests. Social status, religious tendencies, cultural diversity and professional status can all be comprehensively reflected in clothing. Supported by powerful technical tools that complement and shape the creative ideas of fabric designers, a wide variety of clothing styles is possible [1,2]. In recent years, digital design has evolved and improved so much that design companies and textile factories now rely on it. Digital textile printing allows manufacturers to create patterns and preview designs digitally before production, rather than relying on the old manual method. While it is clear that technology is influencing the style of design produced by textile artists, they are also using digital technology to speed up their design process [2].
As a result, stylists have maintained a higher level of design quality and aesthetics by continually improving the design in the printing process. The use of photographic images, digital layering of images, and the complexity of colours and tones require both knowledge and hands-on experience with the required software [2].
Technology in the Arts
In art, the progression from the first paintings to the production of musical instruments and modern films would simply have been impossible without successive technological advances. Throughout history and modern times, technologies ranging from ink, paper and glass to cameras, microphones and computers have enabled new forms of art. Without them, it would be impossible to realize the paintings, ornaments, photographs, movies and modern digital works that fill our museums and galleries (Boucher [3]).
Technology has helped to broaden the perspective and prospects of artists' inventiveness while limiting the problems encountered. It has made the production of art a lot less exhausting and, as a result, artists now have more time to contemplate and expand their creativity (Jin et al. [4]).
According to Boucher (2019), technology and the arts are broadly considered distinct sectors of modern society, with some important links similar to those between the commercial, industrial and legal sectors.
Technology in the Textile Industry
Today, textile artists need to adapt to the availability of machines such as laser cutters, knitting machines, and computerized looms. When textile tools became more automated and computerized, a division emerged within the industry, creating separate roles for the artist and the machine operator [5]. An artist may take on both roles in the textile industry, making their practice more efficient. However, not all textile artists have this advantage, as these machines are costly and often require skilled hands to operate without damage. According to the Los Angeles Times, computer-controlled knitting machines range in price from $10,000 to $190,000. At the top of this price range, the machines are very reliable and fast. Shima Seiki Seisakusho, which sells various types of high-end computerized knitting machines, is one of the best manufacturers in the field. Older used models from inferior brands can be found on eBay for just under $1,000 [6].
The textile and apparel industry is one of the largest industries in the world market, considering the many components of the supply chain for converting raw materials into products and providing them to end-users (Ruppert-Stroescu [7]).
Computers and information technologies that facilitate the transmission of information are integrated at every stage of the textile and apparel supply chain, from design, manufacturing and distribution to marketing, sales and consumption. Technology is an integral part of today's business world (Ruppert-Stroescu [7]).
Technological changes have transformed the manufacturing process of fabric preparation, cutting, material handling, fusing, sewing, pressing, finishing and dyeing by bringing new advanced machines into the industry.
Technology has enabled the textile and apparel sector to tailor different innovations to meet customer demands (Nazururu Islam [8]).
Digital technology is a computer-assisted technique for developing textile designs and textile patterning mechanisms that transform an artist's visual information into a final presentation. The computer is operated by a textile designer or technician who understands the particular textile machine on which the original drawing will be produced. The designer or technician inputs the original drawing through a combination of graphical input devices and traces the drawing as a freehand print. Once the original design is in the computer, it is developed into the visual information that controls the patterning mechanism for a specific kind of textile design. For example, a woven design should depict the interlacing of each warp and weft thread, a knitting design should represent each stitch of the knitted mesh, and a printed design must represent the areas of each colour as separate images [6].
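As a toy illustration of what such machine-readable representations can look like (this is not any specific vendor's or loom's format), a weave can be stored as a binary lift grid and a print design as one boolean mask per colour. The array sizes and colour names below are invented for the example.

```python
import numpy as np

# Plain weave: 1 = warp thread passes over the weft, 0 = weft passes over the warp.
plain_weave = np.indices((8, 8)).sum(axis=0) % 2  # checkerboard lift plan

# A 2/2 twill written out explicitly for a 4x4 repeat unit.
twill_2_2 = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
])

# A print design as separate colour areas: one boolean mask per colour,
# mirroring how colour separations are handed to a printing machine.
height, width = 64, 64
design = np.random.randint(0, 3, size=(height, width))  # toy 3-colour motif
separations = {name: design == k for k, name in enumerate(["indigo", "ochre", "white"])}

for name, mask in separations.items():
    print(f"{name}: {mask.sum()} pixels")
```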
Use of Computers in the Textile Industry
Computer-aided design (CAD) in the textile industry is the technology used to create designs and technical documentation, and it has replaced manual drafting. CAD was originally used to produce high-precision machinery, but it has since penetrated other industries; in the 1970s it gained entry into the textile and garment industry.
Today, most foreign companies integrate some form of CAD into their design and manufacturing processes. The level of technology related to the automation of textile machinery has advanced significantly, and the domestic adoption of machinery technology developed in industrialized countries is undergoing various modifications. Significant and sustained efforts have been made to strengthen domestic capacity and technical support, and today major manufacturers supply the latest machines.
Textile technology, once considered a handicraft, has evolved into a sophisticated scientific and technological activity involving new types of yarns and production techniques. This area draws on various fields of engineering, such as mechanical, electrical, computer, chemical, instrumentation, electronics and civil engineering. Garment and fashion technology, which is part of textile technology, has become crucial in the design, creation and marketing of clothing. All of this requires knowledge of the latest technology, and today's textile design students are ready to take on the challenge (Sudalaimuthu et al. [9]).
Over the last two decades, major changes have occurred in textile production processes. These changes are due to global environmental factors, including technological, economic, social, demographic, political and legal factors. The most crucial changes are related to the introduction of new machines and advanced manufacturing systems. The modernization of technology has transformed the traditional production system for textiles and garments into a modern one (Nazururu Islam [8]).
Early Inventions that Transformed the Textile Industry
The Industrial Revolution was one of the most important factors in transforming the textile industry. Fabric production increased rapidly as it created new machines that allowed fewer workers to do more in less time [10].
During the transition from the 18th century to the 19th century, there was an accelerated development of new technologies and methods that changed the textile industry. With the spread of machines in factories, production volume increased dramatically.
These huge factory-style brick buildings quickly became very popular as people shifted from handlooms at home or in small businesses to the new machines, which increased production speed more than seventy-fold compared with what some artisans could achieve by hand. This led to many more changes, such as an increase in wages because more jobs became available, and improved living standards through better working conditions, since workers were employed full-time with time off on Sundays and holidays [10,11].
Discussed below are three of the most important and exciting inventions that caused a significant transformation in the textile industry: The Cotton Gin: The cotton gin made it easier to separate seeds from cotton fibre, making cotton a popular source of fibre.
The Spinning Jenny: The spinning jenny made it possible to produce more yarn without the need for more workers, and fabric production increased rapidly.
Printing Presses: Printing presses led to an increase in printed fabrics because designs could be duplicated easily and quickly by machine rather than by hand on canvas. Printed fabric was also much cheaper to produce than before, making it quicker and easier to sell [Fiber processing. http://textillearner.blogspot.com].
The Role of Technology in Fabric Design and Fashion
State-of-the-art technological advancement in the digital age has revolutionized the textile and fashion industry by molding the future to accommodate the realities of today. Advanced innovations have changed development patterns, disrupting the apparel business worldwide and offering an impressive array of benefits that are remodeling the fashion design business landscape. Fashion is an extension of one's identity and is about transforming self-esteem into a personal style. Today's fashion cycle is driven primarily by the Internet. As soon as a style causes hysteria in the market, fashion designers begin to create a new range of looks; this shortens the fashion cycle and causes fierce competition. Recent advances in digital technology have had a major impact on fashion e-commerce and fashion retailing. Innovation has continually made great strides in the smart fashion wearables industry. As a result, fashion and branding have become ubiquitous in today's society [12].
Fashion is a system of bodily display derived from costume that extends beyond jewelry, luggage and perfume to a broader definition of luxury goods. Fashion usually encompasses both everyday wear and luxury items, serving more than purely functional needs. Another distinctive feature of the term "fashion" is that it denotes a constantly changing value system in which an item can be considered in or out of fashion; the symbolic revaluation of its cultural and economic value therefore changes rapidly. Fashion is time-based and culturally located. It is a combination of design and innovation, so its quality is defined contextually and relationally, not absolutely. In this sense, fashion can be part of any product. Some argue that this aspect is becoming an important part of explaining the growth and change of the industry, as the design and fashion elements of all products are increasing.
Fashion and Technology: Today's fashion sector is full of creative and innovative trends, including business model changes, new communication strategies, new consumption patterns, and new production technologies and materials. Importantly, these new trends are primarily the result of the integration of fashion systems with current technological advances. In terms of business model innovation, technology and fashion are an inseparable combination. Technology has become an integral part of fashion products; it affects textile production and packaging, communication and distribution, and is changing the entire production process. Recent technological and infrastructure developments in e-commerce have led to the development of new online business models in the fashion and luxury segments. The most innovative models include personal subscription, social merchandising/cloud production, mass customization and collaborative consumption [13].
In this technological revolution, where computers have replaced much of the designer's manual work, fashion designers are in a situation similar to the artists at the dawn of the Industrial Revolution.
From the invention of the sewing machine to the rise of e-commerce, fashion has always been at the forefront of innovation. Like technology, fashion is future-oriented and cyclical [14]. One of the most important consequences of the Industrial Revolution was the mechanization of textile production. Power looms and mechanized spinning mills significantly increased production and reduced production time many times over. More recently, new materials have been designed that improve both the quality and compatibility of fabrics [15].
So-called smart materials have enabled the production of clever fabrics, with high-end technology such as atomic force microscopy and polymeric nanofibres going into the design, manufacture and testing of these fabrics. Ranging from special applications such as suits for space travel, swimsuits and suits for military purposes to more durable and adaptable garments for daily use, these methods have proved extremely effective. Several research centres have sprung up to investigate these exciting possibilities. The role of computers is unarguably prominent in fashion technology [16]. Visualizing the final design at the conceptualization stage down to the finest detail, making suitable modifications if desired, automating several stages of the manufacturing process and finally executing quality control procedures: all of these involve computing at various levels of complexity. Whether CAD is used to create detailed designs or to run computer-controlled knitting and weaving machines, automation is fast becoming the norm. In recent fashion courses, the curriculum always includes a strong technical focus to help aspiring designers keep up with the latest trends [17].
Having a single piece of clothing that can be worn in both summer and winter and that changes texture, colour and even shape depending on the external environment would be a great achievement for the fashion industry. All of this is gradually moving from mere fantasy to reality. Digital clothing, which integrates sensors into the garment itself, and other attractive possibilities are being opened up primarily through interdisciplinary work combining fabrics and fashion technology. At a more mundane level, garments can already be made much faster, several times more durable, and to exact specifications [16].
CURRENT TRENDS
According to Kochar (2022), social media is changing how fashion is consumed and has trained customers to expect instant access to the latest trends as soon as they hit the catwalks. The prevailing trends in art, fashion and textile design have produced products tailored to the needs and tastes of a younger generation that wants to stand out, and this has brought about many innovations.
Biotechnology Techniques
Biotechnology is based on DNA technology that leads to enzyme synthesis to save energy, time, and most importantly resources such as water. With this advanced technology, the manufacturing industry has reached a new horizon with endless opportunities for success and productivity. In this era, biotechnology plays an important role in saving the planet and making it more sustainable and safe for future generations [19].
Textile biotechnology deals with innovative and advanced technologies applied to textile fibre composite structures developed for use in specific design industries. This is an updated, performance-based technology that has resulted in the development of many new high-tech fabrics with high-performance properties such as water and dirt repellency, shock resistance, light weight and temperature control [https://www.fibre2fashion.com, http://textillearner.blogspot.com].
Presently, biotechnology is a dynamic force in the design industry, having been applied in multiple domains such as textiles, medicine, agriculture, fashion and design.
Textiles mainly integrate natural and synthetic materials, and biotechnology has enabled the enormous advance of combining multiple properties in one material. This is beneficial to designers in many ways, including climate-adapted materials used for clothing, home fashion, luxury cars and outdoor applications. Biotechnology is playing a crucial part in innovations such as the following.
Self-cleansing Surface
Self-cleansing fabrics have made a huge impact in the fashion and design industry: they improve the appearance of fabrics by repelling dirt, are easy to clean and are not easily soiled.
Naturally coloured cotton
Who is better placed to envision naturally coloured cotton than designers? One of the great innovations in biotechnology is the production of naturally coloured cotton by genetic engineering. Although the range of colours is currently limited, it may be very interesting to see primary, secondary and tertiary colours growing in cotton fields in the future. The world would be a much better place without dyes and pigments that are very harmful to human health and the environment.
Animal Fibre
There are biotechnological injections given to sheep to obtain valuable wool for outerwear: after a period of time a break forms in the fibre, and the wool can then be peeled off rather than shorn, which takes half the effort of cutting the sheep's hair. Another major advance is scorpion goat wool, which can withstand very high temperatures and is used to make astronaut spacesuits [9].
Nanotechnology Techniques
Nanotechnology is one of the innovations of the industrial revolution era in which the properties of materials change drastically once they are reduced to the nanoscale. In textiles, nanotechnology involves treating materials by coating them with nanomaterials to improve the properties of the fabric and make it more durable. It is encouraging for designers that, when observed with the correct instrumentation, materials can change colour at this nanoscale level. Innovations in nanotechnology have changed the business side of everything associated with style, fashion and the textile industry [20]. Through nanotechnology techniques, the functionalities of the textile sector have been extended as innovative high-performance properties have emerged: stain repellency, water repellency, ultraviolet protection, anti-static behaviour, wrinkle resistance, antimicrobial action, flame retardancy, biodegradability, and bulletproof and defence clothing, to name some [21].
Genetic Engineering Techniques
In genetic engineering techniques, a key development is coloured fluorescent silk, a class of fabric that looks very trendy and style-oriented because the material appeals to designers. Researchers have produced it by inserting fluorescent proteins taken from corals and jellyfish into the silkworm genome. As a result of this genetic transformation, the properties of the fabric are much the same as those of ordinary silk, although it becomes slightly weaker after processing. The processes involved in genetically engineered silk are captivating and will pioneer modern, innovative, productive and viable ways of utilizing these materials in fashion and textiles. Synthetic spider silk is one of the latest innovations in genetic engineering, with the additional property of being biodegradable [22].
Artificial Intelligence
In the textile and fashion design industry, artificial intelligence (AI) is playing an important role; the duties of designers have become much more demanding, so much so that they will need to equip themselves with the next generation's tools and technology [23].
In terms of fashion design, AI tools are vital in this era of technological advancement, because it is challenging for designers to review several seasons' collections for the newest trends and to keep up with data collection and sorting. AI offers a solution, as complete information on all previous collections and an enormous quantity of data are available at the press of a button. The traditional approach to design remains the same: doing research, assembling materials, creating prototypes, and so on. However, to keep up with the newest technological developments, designers will have to master the new tools to enhance their design processes [24].
Kochar [18] notes that AI is being used by brands to improve customers' shopping experience, analyse data, boost sales, forecast trends and provide inventory-related guidance. These technologies have established AI as the way forward for development in the fashion industry.
CONCLUSION
Art, fashion and textile design technologies have a deep and exceptional connection that runs through all phases of their evolving growth. They have also created an advanced environment for social and business synergies that helps innovators and producers to address the challenges of the day. This technology is very expensive and is compatible with all of its applications. Textiles and artistic design have much potential and continue to be a lasting example of the power of technology. | 2022-06-08T15:07:49.269Z | 2022-06-03T00:00:00.000 | {
"year": 2022,
"sha1": "4afab5eb815cae229856a98ccc75b51735a75dae",
"oa_license": null,
"oa_url": "https://journalarjass.com/index.php/ARJASS/article/download/30304/56856",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2e6538e61bd97d000d18fef668ef55a7c200fe43",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |
158281538 | pes2o/s2orc | v3-fos-license | Nitrate Vulnerable Zones Revision in Poland — Assessment of Environmental Impact and Land Use Conflicts
Despite concerted efforts across the European territory, the problems of nitrogen pollution released from agricultural sources have not yet been resolved. Therefore, infringement cases are still open against a few Member States, including Poland, based on problems in fulfilling commitments under the Nitrate Directive. As a result of the litigation process, Poland has completely changed its approach to nitrate vulnerable zones. Instead of applying only to selected areas, the measures will be implemented throughout the whole Polish territory. Additionally, further restrictions concerning the fertilizer use calendar will be introduced in areas indicated as extremely cold or hot, based on the average temperature distribution (poles of cold and heat). Such a change will be of key importance to farmers, whose protests are already audible throughout the country and can be expected to intensify. To assess the impact of the introduced modifications, a modelling approach has been adopted. The use of the Macromodel DNS/SWAT allowed for the development of baseline and variant scenarios incorporating details of the stipulated changes in fertilizer use for a pilot catchment (Słupia River). The results clearly indicate that the new restrictions will have a substantial effect on the aquatic environment by altering the amount of released total nitrogen.
Introduction
Nitrogen (N) is the most important nutrient controlling agricultural primary production. To support a rapidly growing population and its food demand, this production requires the external application of N in the form of fertilizers. Despite the introduction of new N management techniques, e.g., precision agriculture or synchronizing N application with crop demand [1,2], considerable amounts of N are still being lost to the environment, contributing to surface and groundwater pollution, eutrophication, and their subsequent effects on aquatic ecosystems. Anthropogenic nutrient over-enrichment is considered a main problem in many regions of the world [3]. In Europe, the average use of N fertilizers has grown substantially in recent times, from 44.9 kg N/ha in 2004 up to 49.8 kg N/ha in 2015 [4].
To prevent and reduce water pollution from nutrients arising from agricultural sources, the European Union (EU) introduced a range of control measures, with general rules set by the Nitrate Directive (ND) (91/676/EEC). This document obliges Member States to designate areas vulnerable to nitrate pollution and to concert efforts to reduce this pollution through national action programmes. These programmes require land managers to follow a range of measures, such as, among others, controlling the timing and quantities of fertilizers applied to the land and ensuring proper storage capacity for livestock manure. It has already been concluded that implementation of the ND decreased both N leaching losses to ground and surface waters and gaseous emissions to the atmosphere in the EU-27 [5]. However, the problem of nutrient pollution has not been completely solved in the EU so far [6].
Poland is one of the countries located in the Baltic Sea (BS) catchment. Due to its proportion of the catchment area (ca. 18%) and its very large share of agricultural land (ca. 50%) [7], it is considered one of the main contributors to the excessive nutrient loads into this sea. Indeed, despite concerted actions in all the Baltic countries focused on the reduction of nutrient loadings, 96% of the total surface area of the BS remains affected by eutrophication [8]. Nutrient loads from Poland are discharged mostly from riverine sources, via the two main rivers (Vistula and Oder) draining ca. 88% of the country's territory, and are estimated under the National Environment Monitoring Program [9]. A substantial downward trend has been observed in the total nitrogen (TN) load from the Polish territory, from 262 thousand tons in 1995 to 170 thousand tons in 2014 [10]. However, it should also be noted that 9 small rivers from the Pomerania region carry their waters directly into the BS. These rivers, including the Słupia River, which is used as a pilot catchment in this study, are not monitored for nutrient loads [11].
Of the total area of Poland, agricultural land occupies ca. 60% of the surface, with the industry profile dominated by private farms [12]. The restructuring trend observed in Polish agriculture after 1990, also enforced by the adoption of EU laws, resulted in a general decrease of pressures on aquatic environment quality (e.g., improvement of sewage system management) [9,13]. However, Polish farmers striving to achieve competitiveness in the EU agricultural market have also increased the use of fertilizers (from 47 kg N/ha in 1995 to 72 kg N/ha in 2016) [14,15]. Since agricultural production, as well as fertilizer use, shows strong regional differentiation in Poland, based on the variability of natural and economic factors [12], the measures to address nutrient discharge problems should also differ. This paper aims to present possible changes in the N load discharged from Polish catchments under the new measures imposed by the EU. To assess possible changes for the aquatic environment, a modelling approach was adopted, allowing the analysis of different variant scenarios of fertilizer use. Moreover, the 15-year period of meteorological data used in this study also allowed conclusions to be drawn on the possible impacts of climate change on the N yield from the catchments.
NVZ Status Quo in Europe
Agriculture (i.e., arable land, permanent crops, pastures, and mosaics) occupies nearly 42% (2012) of the EU territory. However, a visible trend of agricultural land loss through abandonment or withdrawal from farming activities has been observed recently (-0.5% of the agricultural area, 2006-2012) [16]. Agriculture provides indisputable benefits to society, although some farming activities cause substantial impacts on water bodies [6]. Although an improvement in point-source management has been observed in recent decades in Europe [17], most studies identify farming practices as a major source of nutrients in the aquatic environment [18,19]. Crop and livestock production, as well as fertilizer use, are considered the main accelerators of the nutrient cycle, as global surpluses of N continue to increase (+23% N) [20].
In general, the nutrient balance (defined as the difference between the nutrient inputs entering and the outputs leaving farming systems) is being altered in many areas of Europe. In the period 2012-2014, all Member States except Romania had a surplus of N, with values exceeding 50 kg/ha in many countries [6]. Agricultural nutrient discharges alter the quality of aquatic resources and result in an increased eutrophication risk [21], and the pressure of over-fertilization attracts more attention as climate change and the deterioration of fresh water become more critical. According to the conclusions of the Fifth IPCC Assessment Report (AR5) and the findings of the National Oceanic and Atmospheric Administration (NOAA), climate change will affect (with varying probability) the global water cycle, causing changes in precipitation distribution and its intensification over land areas, as well as in the frequency of extreme weather events [22,23]. These changes will likely affect the crop cycle and, therefore, its N uptake capacity. Moreover, the results of modelling studies [24] show that, under specific climate change scenarios, an increasing trend in nitrate leaching is possible.
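The surplus figures quoted above rest on a simple accounting identity. The exact input and output terms used in the cited statistics are not reproduced here; the decomposition below follows the commonly used Eurostat/OECD gross nitrogen balance and is given only to make the definition concrete.

```latex
\mathrm{N}_{\text{surplus}}
  = \underbrace{\left(\mathrm{N}_{\text{mineral fertilizer}}
      + \mathrm{N}_{\text{manure}}
      + \mathrm{N}_{\text{biological fixation}}
      + \mathrm{N}_{\text{atmospheric deposition}}\right)}_{\text{inputs}}
  \; - \;
  \underbrace{\mathrm{N}_{\text{crop and forage removal}}}_{\text{outputs}}
  \quad \left[\mathrm{kg\ N\ ha^{-1}\ yr^{-1}}\right]
```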
To tackle the issue of water pollution caused by nitrates from agricultural sources at the catchment level, nitrate vulnerable zones (NVZs) have been introduced under the ND throughout the EU. The NVZs are defined with reference to those waters, surface and/or groundwater, in which nitrate levels exceed or are likely to exceed 50 mg/L as a result of agricultural sources. The extent of the ND's application depends upon the interpretation of its requirements by Member States, particularly the interpretation of "vulnerable", since this affects the extent of the area subject to mandatory requirements. In accordance with the provisions of the ND, Member States had two options to implement its stipulations. The first method relied on identifying the waters affected by nitrate pollution, and those which could be affected if no action were pursued, and designating NVZs in their territories. The second method allowed a Member State to be exempted from the obligation to identify NVZs by implementing action programmes throughout its national territory. The countries opting for the second method were: Austria, Denmark, Finland, Germany, Ireland, Lithuania, Luxembourg, Malta, the Netherlands, Romania, Slovenia, the Region of Flanders, and Northern Ireland.
In the remaining countries, criteria for NVZ designation had to be developed, and most of them depended on the results of various computer simulations [25,26]. The total area of NVZs, including the countries that apply a whole-territory approach, represented approximately 61% of the EU agricultural area in 2015. Therefore, there are still areas in Europe with potential water pollution that are not included in any action programme. Moreover, in some countries, the designated territories are limited to small areas, putting in question the potential effectiveness of the action programmes [6].
NVZ Revision in Poland
The provisions of the ND were officially introduced in Poland in 2004. Delimitation of the initial range of NVZs was performed by the Regional Water Management Authorities in each of the seven surface water districts and was based on: (i) the content of N compounds in surface and underground waters; (ii) eutrophication of surface waters (including internal and coastal sea waters); (iii) agricultural land use structure and soil typology; (iv) the type of agricultural activity and the concentration of animal production; and (v) the prevailing meteorological, hydrological, and hydrogeological conditions [27]. Originally, in the first period of the ND being in force in Poland (2004-2008), 21 NVZs were delimited, accounting for 2% of the total area of the Polish territory. Moreover, the Action Programmes for Reduction of Outflow of Nitrogen from Agricultural Sources and the Code of Good Agricultural Practices were prepared and implemented [28]. The standards included in these documents related primarily to the requirements for the management of fertilizers and plant protection products, water and soil conservation, the rational use of wastewater and sewage sludge, the conservation of valuable habitats and species found in agricultural areas, as well as maintaining cleanliness and order on farms [29].
Following the ND requirements, the eutrophic state of the waters, the action programmes, and the extent of NVZs should be reviewed every four years. Hence, such actions were performed in Poland in 2008 and 2012, based on the results of monitoring tests, expert assessments [30] and modelling results [31]. In the second cycle (2008-2012), 19 NVZs were designated, covering approximately 1.5% of the country's area. Then, in the third cycle (2012-2016), 48 NVZs constituted 4.46% of the Polish territory. The process of ND implementation in Poland has been followed and assessed by the European Commission (EC) [32], which indicated that the surface areas designated as NVZs were insufficient. Because Poland did not comply with the EC recommendations, a litigation process was started in 2013 (Case C-356/13). As Poland failed to fulfil the ND obligations through inadequate identification and classification of nitrate vulnerable waters and the adoption of incompatible measures in the action programmes, a change in the approach to the ND was imposed. This change resulted in switching to the second option of implementing the ND provisions, i.e., designating the whole country's territory as a nitrate vulnerable zone and developing a new approach to tackle the nitrogen pollution issue.
Consequences of the NVZ Approach Change
Apart from extending the range of NVZs, the new approach to the implementation of the ND requirements will be based on the new Action Programme adopted in June 2018 [33]. This programme specifies, among others, the periods of fertilizer application, storage conditions for natural fertilizers, doses and methods of nitrogen fertilization, and the way in which farmers are to document the implementation of these requirements. According to this document, mineral nitrogen fertilizers can be applied on arable land from 1 March to the end of October, other (solid) fertilizers from 1 March to 30 November, and liquid fertilizers on the same dates as nitrogen ones (Table 1). Selected fertilizers (mineral nitrogen and other liquid fertilizers) can be applied from 15 February on grounds that are not frozen, covered with snow or water, or saturated with water. Additionally, in accordance with this regulation, liquid and solid manure fertilizers should be stored in a way that prevents leachates from entering the ground and water during the period when they are not used. Moreover, natural fertilizer storage capacity should be increased to enable storage for a period of six months, not four as before. The document also indicates that farmers are obliged to keep records of nitrogen fertilization and agree them with the regional agricultural stations. One of the most important elements of the ND is the control of doses and periods of fertilization on selected types of land. Because Poland extends over 649 km in the north-south dimension and 689 km in the east-west dimension, there are significant differences in the average annual air temperature between the extreme regions of the country. This fact has a direct influence on plant vegetation periods and, subsequently, on the periods of fertilizer application. Therefore, it was considered necessary to designate areas of the country where the dates of fertilizer application would be shortened or extended. This task, performed by the Institute of Meteorology and Water Management (IMGW), resulted in the delimitation of the so-called "pole of cold" and "pole of heat" (Figure 1). The confines of the poles follow the administrative borders of municipalities (communes), and the delimitation process was based on several criteria. Eventually, the "pole of cold" and "pole of heat" areas covered 5.8% and 5.7% of the country's territory, respectively. The "pole of heat" covers three provinces (Opolskie, Dolnośląskie, and Lubuskie) and forms a single area. The "pole of cold", on the other hand, consists of three areas covering mountainous regions in the south of the country in the Dolnośląskie, Śląskie, Małopolskie, and Podkarpackie voivodships, as well as the northeastern Polish area in the Warmian-Masurian and Podlasie voivodships.
According to the new regulations, the available time for fertilization in the "pole of cold" area has been shortened by a total of 20 days for solid organic fertilizers and 16 days for solid nitrogen and nitrogenous mineral fertilizers. For the "pole of heat" area these periods have been extended by 15 and 30 days, respectively (Table 1).
Modeling of the NVZ Revision's Impact - Variant Scenarios
To estimate the impact of introducing additional restrictions on fertilization periods in the "pole" areas, model calculations have been performed using the macromodel DNS/SWAT. Its construction and applicability have been described elsewhere [36]. Briefly, the macromodel combines existing mathematical models and equations of hydrological transport in a catchment with the benefits of the SWAT module, which is generally used to model continuous long-term yields within hydrologic response units. The numerical model of a catchment created with the use of the macromodel DNS/SWAT enables the analysis of different scenarios of catchment exploitation under different meteorological and hydrological conditions. This tool is also used to analyse the yield of nutrients at any selected control point on the river [11,37].
In the current study, the Słupia River catchment (Figure 1) has been selected as a pilot area for the modelling purposes.Although, this catchment is not located in any of the designated "pole" areas, covered by additional restrictions, its choice was promoted by many factors.Above all, the Słupia River discharges directly into the Baltic Sea.Moreover, this catchment has been used for total nitrogen (TN) field research by the IMGW since 2013.These results are beneficial in verification of the TN calculations with use of the macromodel DNS/SWAT.The aforementioned research conducted also at the neighbouring catchments confirmed that the Słupia River may be considered a representative catchment for the whole Pomeranian region.Therefore, it plays an important role in the estimation of TN loads from the territory of Poland into the Baltic Sea [38][39][40].The Słupia River has a length of 139 km, and its catchment covers an area of 1623 km 2 .The entire catchment is located Eventually, the "pole of cold" and "pole of heat" areas covered the range of 5.8% and 5.7% of the country's territory, respectively.The "pole of hot" covers three provinces (Opolskie, Dolnośl ąskie, and Lubuskie) and is a single area.The "field of cold", on the other hand, consists of three areas covering mountainous areas in the south of the country in the Dolnośl ąskie, Śl ąskie, Małopolskie, and Podkarpackie voivodships as well as the northeastern Polish area in the Warmian-Masurian and Podlasie voivodships.
According to the new regulations, the available time for fertilization in the "pole of cold" area has been shortened by a total of 20 days for solid organic fertilizers, and 16 days for solid nitrogen and nitrogenous mineral fertilizers.For the "pole of heat" area these periods have been extended by 15 and 30 days, respectively (Table 1).
Modeling of NVZ Revision' Impact-Variant Scenarios
To estimate the impact of introducing additional restrictions on fertilization periods in the "pole" areas, model calculations have been performed using the macromodel DNS/SWAT.Its construction and applicability have been described elsewhere [36].Briefly, the macromodel combines existing mathematical models and equations of hydrological transport in a catchment with the benefits of the SWAT module, which is generally used to model continuous long-term yields within hydrologic response units.The numerical model of a catchment created with the use of the macromodel DNS/SWAT enables to analyse different scenarios of the catchment exploitation in different meteorological and hydrologic conditions.This tool is also used to analyse the yield of nutrients at any selected control point of the river [11,37].
In the current study, the Słupia River catchment (Figure 1) has been selected as a pilot area for the modelling purposes. Although this catchment is not located in any of the designated "pole" areas covered by additional restrictions, its choice was motivated by several factors. Above all, the Słupia River discharges directly into the Baltic Sea. Moreover, this catchment has been used for total nitrogen (TN) field research by the IMGW since 2013. These results are valuable for verification of the TN calculations performed with the macromodel DNS/SWAT. The aforementioned research, also conducted at neighbouring catchments, confirmed that the Słupia River may be considered a representative catchment for the whole Pomeranian region. Therefore, it plays an important role in the estimation of TN loads discharged from the territory of Poland into the Baltic Sea [38-40]. The Słupia River has a length of 139 km, and its catchment covers an area of 1623 km². The entire catchment is located in the northwestern part of the Pomeranian Voivodeship (Figure 1), and its area is dominated by agricultural use (almost 50% of the total area). Forest covers 44% and occurs mainly in the central part. Buildings and anthropogenic areas constitute about 4% of the catchment area, with the main urban centres in Słupsk, Ustka, and Bytów. Floods in this catchment occur in the snow-melting period (spring) and are inconsequential; however, the outlet area is frequently affected by sea water storm surges. The Charnowo profile was selected as the main calculation profile for the macromodel DNS/SWAT. This cross-section is located close to the mouth of the river (Figure 1), but at the same time far enough upstream to be unaffected by backwater from the sea. At the Charnowo profile, a semi-automatic device (autosampler) has also been installed to collect water samples for the TN analyses. The macromodel DNS/SWAT generates datasets for the flow rate and loads of selected pollutants such as TN; data are generated with a daily time step for any selected profile on the river.
The baseline scenario of the Słupia River model (VS0), also referred to in the current study as the reference or control scenario, was created with the goal of representing the real conditions in the catchment as accurately as possible. To create VS0, input data describing the catchment were used. Since the processes affecting the outflow of N and P loads from the catchment may be completely different depending on the hydrological characteristics of a given year, the VS0 was based on an uninterrupted period of 15 years, including years with dry, average, and wet hydrological conditions. Then, the VS0 was calibrated and verified using the TN data from the IMGW monitoring station.
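The calibration and verification step compares simulated TN at the Charnowo profile with the IMGW monitoring series. A minimal R sketch of such a comparison is given below; the paired-load layout, the example numbers and the choice of Nash-Sutcliffe efficiency and percent bias as goodness-of-fit measures are illustrative assumptions, not details taken from the original study.

# Nash-Sutcliffe efficiency: 1 = perfect agreement, <= 0 = no better than the mean of observations
nse <- function(sim, obs) {
  1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
}

# Percent bias with this sign convention: positive values = simulated loads too high
pbias <- function(sim, obs) {
  100 * sum(sim - obs) / sum(obs)
}

# Hypothetical paired daily TN loads (kg/day) at the Charnowo profile
tn_obs <- c(120, 340, 510, 280, 150)
tn_sim <- c(140, 300, 480, 310, 170)

nse(tn_sim, tn_obs)
pbias(tn_sim, tn_obs)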
The use of the macromodel DNS/SWAT also allows incorporation of data on agrotechnical treatments through a so-called fertilization calendar. The fertilization calendar is based on the maximum permissible fertilizer dose that can be applied, taking into account the requirements of cultivated plants, soil types, and slope. The calendar divides fertilizers into mineral and organic and distributes them over specific months and days. All three prepared variant scenarios were based on modifications of the fertilization calendar. For the scenarios representing the "pole of cold" conditions, the fertilization periods were shortened, whereas for the scenario representing the "pole of heat" conditions, they were extended. To assess the effects of the fertilization restrictions in the "pole of cold" and "pole of heat" areas in the Słupia River catchment, three variant scenarios (VS1-VS3) were created (a minimal sketch of how these calendar modifications can be encoded follows the list below):
• VS1 ("pole of cold" variant scenario 1): the fertilization period is shortened by 20 days, thereby reducing the amount of both mineral and organic fertilizers used during the allowed period;
• VS2 ("pole of cold" variant scenario 2): the fertilization period is shortened by 20 days while the VS0 amount of fertilizer applied in the catchment is maintained, through an increase of the fertilizer dose in the remaining allowed period; and
• VS3 ("pole of heat" variant scenario): the fertilization period is prolonged by 30 days in total, while the VS0 amount of fertilizer applied in the catchment is maintained.
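A minimal R sketch of these three calendar modifications is shown below. The calendar layout (one row per application, with a date and a dose in kg N/ha), the crop names and the example dates are illustrative assumptions; the actual DNS/SWAT input files are structured differently.

library(dplyr)

# Hypothetical baseline (VS0) fertilization calendar
calendar_vs0 <- data.frame(
  crop = c("winter wheat", "winter wheat", "maize"),
  type = c("mineral", "mineral", "organic"),
  date = as.Date(c("2010-03-05", "2010-09-10", "2010-10-20")),
  dose = c(60, 40, 80)                      # kg N/ha per application
)

end_vs0 <- as.Date("2010-10-31")            # end of the allowed period in VS0

# VS1: period shortened by 20 days; applications after the new end date are
# dropped, so the yearly N input decreases
calendar_vs1 <- filter(calendar_vs0, date <= end_vs0 - 20)

# VS2: same shortened period, but the dropped dose is moved onto the last
# allowed application so the yearly N input stays equal to VS0
dropped <- sum(calendar_vs0$dose) - sum(calendar_vs1$dose)
calendar_vs2 <- calendar_vs1
last_row <- order(calendar_vs2$date)[nrow(calendar_vs2)]
calendar_vs2$dose[last_row] <- calendar_vs2$dose[last_row] + dropped

# VS3: period extended by 30 days; applications may be rescheduled within the
# longer window, but the yearly total dose is kept at the VS0 level
end_vs3 <- end_vs0 + 30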
Results
All scenario simulations were performed on monthly data for the period of 2002-2016. In the current study, to underline the seasonal differences, the yearly data and subsequent calculations were divided into two periods: summer (April-September) and winter (October-March), for which the monthly average TN load values for all the prepared scenarios have been calculated (Table 2). The differences in the TN loads between summer and winter months are clearly visible already for the VS0, with an average difference between the TN loads for the summer and winter months of 18,494 kg/month. Even larger differences between the average seasonal TN loads at the Charnowo calculation profile were detected during the variant scenario simulations, with the highest value for the VS3, reaching 42,248 kg/month (VS1: 36,218 kg/month and VS2: 38,295 kg/month). For VS0 and all three variant scenarios, the percentage of the TN load reduction (taking the TN value for VS0 as 100%, Table 2) and the resultant dispersion were also calculated. The dispersion was based on Equation (1):

R = (Xmax − Xmin) / Xmean × 100%, (1)

where R is the data dispersion within the month, Xmax is the maximum value of the measurement during the month, Xmin is the minimum value of the measurement during the month, and Xmean is the mean value of the measurement during the month. Simulation results showed that the reduction of the fertilization period by 20 days and, thus, the reduction of the fertilizer amount by 55 kg/ha TN for each of the five crop types included in the fertilizer calendar (VS1) could bring an 8.61% reduction of the TN load (average for the period of 2002-2016) in the Słupia River catchment. Dispersion of the yearly results was very high for this scenario (775%), and clear differences were visible between the seasons and among particular years (Table 2, Figure 2). The reduction of the TN load varied from 0.86% to 6.85% for the summer period, and from 9.87% to 19.16% for the winter period. The average TN load difference at the selected calculation profile between the baseline and "pole of cold 1" scenarios was 762 kg/month (coefficient of variation, cv = 60%) during the summer period, and 7171 kg/month (cv = 21%) during the winter period. As for particular years, the lowest values of the VS0 and VS1 difference were observed in 2002-2003 and 2011, and the highest in 2006 and 2015 (for the summer and winter period, respectively). In general, in all analysed years, the results of the VS1 variant scenario maintained the constant trend observed in the catchment, i.e., high TN load values in winter periods (from October to March) and low TN load values in summer periods (from April to September), with the months in which the TN load was particularly low being July, August, and September. For the last two years (2014-2016) specified in Figure 2, the lowest TN load was slightly over 4700 kg/month (September 2014), while the highest TN load reached almost 62,000 kg/month (January 2015). In practice, this means that the TN load values for the winter period are over 15 times higher than in the summer period. Figure 2 shows that in some earlier years the differences described were even greater.
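A compact R sketch of the post-processing described above, i.e., splitting monthly TN loads into the summer and winter periods, expressing a variant scenario as a percentage reduction relative to VS0, and computing the dispersion R of Equation (1), is given below; the simulated numbers and column names are illustrative and do not come from the actual model output.

library(dplyr)

# Hypothetical monthly TN loads (kg/month) for the baseline and one variant
set.seed(1)
tn <- data.frame(date = seq(as.Date("2002-01-01"), as.Date("2016-12-01"), by = "month"))
tn$VS0 <- runif(nrow(tn), 5000, 60000)
tn$VS1 <- tn$VS0 * runif(nrow(tn), 0.80, 1.00)

tn <- tn %>%
  mutate(month   = as.integer(format(date, "%m")),
         season  = ifelse(month %in% 4:9, "summer", "winter"),
         red_vs1 = 100 * (VS0 - VS1) / VS0)     # % reduction relative to VS0

# Seasonal averages, as reported in Table 2
tn %>%
  group_by(season) %>%
  summarise(mean_vs0 = mean(VS0), mean_vs1 = mean(VS1), mean_red = mean(red_vs1))

# Dispersion of Equation (1), applied to the loads within a single month
dispersion <- function(x) 100 * (max(x) - min(x)) / mean(x)
dispersion(c(900, 1500, 4200, 650))   # example call with daily loads (kg/day)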
For the next "pole of cold" scenario (VS2), with shortening of the fertilization calendar but without reduction of the fertilizer amount, the percentage of the TN load reduction was slightly lower compared to the VS1 scenario (5.96%). Dispersion for the VS2 results was at a similar level (737%), and a similar pattern was maintained for the seasonal changes. However, slightly lower ranges of the reduction for the summer (0.47-7.55%) and winter (5.94-12.05%) periods were detected (Table 2). The average TN loads for both summer and winter were also lower than those calculated for the VS1 scenario, by 0.2% and 3.59%, respectively. Comparing the results of VS2 with the baseline scenario, an average TN load difference of 716 kg/month (cv = 72%) for the summer period and 5049 kg/month (cv = 20%) for the winter period was found. As for the yearly pattern of the TN load distribution, the extreme values were detected in the same years as for VS1 (2002-2003 and 2006) for the summer period, whereas during the winter period the extreme values were observed in 2002 and 2012. Similarly to the variant scenario VS1, also for VS2 the dependence of the TN load value on time remained the same (high values in the winter months and low values in the summer months) (Figure 3). For the years 2014-2016, the lowest TN load was 4810 kg/month (September 2014), while the highest TN load was 64,666 kg/month (December 2015), which again highlights the 15-fold difference between the winter and summer periods.
Extension of the fertilization period by 30 days (while maintaining the yearly fertilizer consumption), assumed in the "pole of heat" scenario (VS3), showed opposite trends. For the discussed pilot catchment, such a change could result in a 43.69% increase of the TN load compared to the baseline scenario. The reduction percentage for the VS3 compared to the VS0 scenario ranged between 67.77% and −145.09% (i.e., an increase) for the summer period, and from 37.71% to −176.14% (an increase) for the winter period (Table 2). Dispersion of the results was smaller than in the case of the VS1 and VS2 scenarios and amounted to 677%, and dramatic changes could be observed for particular years (Figure 4). For the summer period, the extreme values were observed in 2002 and 2004, while the contrasting (lowest-highest) values were detected in 2007 and 2008. The average TN values showed an increase by 2220 kg/month (cv = 520%) for the summer period, and by 1841 kg/month (cv = 1742%) for the winter period. Additionally, in the case of the variant VS3 scenario, the trend has been preserved for most years. In the case of this scenario, however, there is a clear difference in the TN load values obtained for selected months compared to the VS0 scenario. For the 2014-2016 period highlighted in Figure 4, the lowest TN load was about 4800 kg/month (September 2015), while the highest value (~107,000 kg/month) was reached in January 2014. In comparison to the VS1 and VS2 scenarios, the difference between TN loads for summer and winter in the period 2014-2016 has even increased, being 26 times higher during the winter period.
Discussion
Introduction of the new NVZ approach in Poland is expected to bring vast changes for the environment, but also for communities living from agriculture, which constitute an important part of Polish society. After adoption of the new legislation, Poland will become one large NVZ, which will result in additional restrictions applying to farmers in all regions of the country. These restrictions are meant to bring environmental benefits; however, conflicts between this large and influential social group and the legislator are expected. Fear of additional costs related to the required farm investments, and of the extra effort to be put into fertilizer use, reporting, and management, has already been observed in the press and in social media activity focused on agribusiness issues. An even more explicit social response is expected from the areas assigned to the "pole of cold" and "pole of heat". It should be noted that restrictions related to the fertilizer use calendar will be imposed on farms based on the administrative borders of communes, without recognition of the actual range of the corresponding crops. Areas defined as the "pole of cold", especially in mountain areas (Figure 1), are already less competitive compared to the rest of the country (poorer soil quality); therefore, introducing new regulations may further aggravate the current situation of farmers. Moreover, one should expect particularly strong social resistance related to the introduction of regulations different from those in other parts of the country. The "pole of heat" areas will likely experience fewer conflicts, as farms located in this area will benefit from an extended period of fertilization. However, overuse of fertilizers, exceeding the available dose, could be expected through the application time extension.
Despite the possible social conflicts, the results obtained from the macromodel DNS/SWAT clearly show that restrictions imposed on farmers in the "pole of cold" areas will bring sound ecological effects. The simulations for the variant scenarios reflecting the conditions to be met in these regions show a noticeable reduction of the yearly TN load (8.61% and 5.96% for VS1 and VS2, respectively) in the pilot catchment. Since the required shortening of the fertilizer calendar does not necessarily guarantee a reduction of the total dose of chemicals used in the catchments, the second "pole of cold" scenario (VS2) should be considered the more probable one. In the case of the pilot catchment, this scenario would result in an average decrease of the TN load by ca. 3000 kg/month. On the contrary, calculations for the "pole of heat" scenario (VS3) resulted in a distinct increase of the yearly TN load (43.69%) due to the extension of the fertilizer use period. Such an increase has to be considered significant, especially taking into consideration that the size of the pilot catchment does not exceed 1620 km². For the catchment of the Warta River, located in the central part of Poland, with a size of 54,529 km² and where intensive agricultural activity is conducted [36] (with ca. 12% of the total area belonging to the "pole of heat" area), such a level of increase would likely result in a surplus of hundreds of thousands of kg of TN per month. However, it should be remembered that the studied Słupia River catchment does not belong to the pole areas; therefore, only the general restrictions imposed by the new NVZ regulations will be introduced there. Even in that case, the average TN load discharged into the Baltic Sea by this river (ca. 43,000 kg/month) should be reduced. Additionally, taking into consideration the presence of eight other small rivers with comparable catchments in the Pomerania region, this would bring a highly desirable reduction of the TN load discharged from Poland into the Baltic Sea. At this stage it is difficult to say whether the remaining catchments in this region will respond comparably to the implementation of the new regulations, but it is nevertheless reasonable to assume that the TN load will be reduced throughout the area.
The obtained ecological results should, however, be discussed taking into consideration the seasonal and yearly patterns of TN load changes. Significant differences in the TN loads between the winter and summer periods were observed in the modelling results (VS0-VS3) for the study area. The summer TN loads in the Słupia River at the Charnowo calculation profile were noticeably smaller (on average by ca. 64% across all discussed scenarios) than the winter loads. This phenomenon, called the flattening phenomenon, has been observed in different catchments in Poland. It consists in a periodic reduction of nutrient compounds released to surface waters, due to the influence of plant cover on the retention of water and nutrients [41]. In the case of the performed simulations, this phenomenon was additionally altered through the changes in the fertilization period. Both circumstances contributed to the high values of result dispersion. The obtained values were at the level of 737-775% for the "pole of cold" scenarios and slightly lower (677%) for the "pole of heat" calculations, with the latter value most likely related to the extension of the fertilization period in the VS3 scenario. These results confirm the natural variability of the TN loads depending on the season of the year, which is characteristic of the majority of catchments at this latitude. For example, the dispersion of the TN load for the already mentioned Warta River was at a similar level (ca. 674%).
As for the yearly pattern of differences, a large variability of the average TN loads was observed even for the reference scenario (VS0). For this variant, the extreme TN load values were recorded in 2003-2005 and 2014 (Table 2). To investigate this pattern, 1-min data from the Ustka meteorological station, located directly in the Słupia River catchment, were used. These data were processed at the IMGW-PIB for the needs of the Polish Atlas of Rainfall Intensity (PANDa) currently being prepared by RETENCJAPL [42]. This information allowed identifying sudden short-term precipitation events and the frequency of their occurrence on particular days and months of the year. Detailed information was retrieved for the years in which extremely high or low values of the TN load reduction were observed in each season. Thus, the years 2003 and 2005 were selected as bearing extreme values during the summer season (9850 and 38,604 kg/month, respectively, for both "pole of cold" scenarios), while 2014 and 2004 bore the extreme values for the winter period of VS1 and VS2 (24,486 and 98,720 kg/month, respectively). For these years the seasonal precipitation was examined at the Ustka station. The average rainfall for this station is quite high and ranges from 497 mm during the winter period to over 873 mm in the summer period, with the highest sums recorded from July to October (over 60 mm), and the smallest (below 40 mm) in January and April. For the Ustka station, both 2003 and 2005 were classified as dry years. In 2003, heavy rain in the summer season was scarce, while the summer period of 2005 was extremely dry; on the other hand, from October to December numerous short-lived but intense rainfalls were noted. The years selected as extreme from the point of view of the TN load were considered wet. The recorded rainfall data for 2004 clearly justify the maximum TN load in the winter of this year, as heavy rainfalls had been occurring since September. Conversely, in 2014, after intensive summer rainfall there was a dry autumn and winter, explaining the low TN load values for both VS1 and VS2. A similar analysis performed for the "pole of heat" scenario VS3 indicated the years 2004 and 2002 as bearing the minimal and maximal TN loads (10,239 and 42,176 kg/month, respectively). The year 2004, as already mentioned, was characterized by exceptionally low rainfall values during summer, with the only precipitation that could have had a significant impact on the surface runoff volume observed from September of that year. In turn, 2002 was full of numerous short-lived but intense precipitation events during the summer months. For the winter period, the minimal and maximal TN loads were observed in 2015 and 2008 (29,844 and 103,619 kg/month, respectively). Unfortunately, the data for 2015 are difficult to analyse, since they are very limited and do not allow establishing the relationship between precipitation and TN load. As for 2008, numerous intense rainfalls from August to December were detected. There is no doubt that atmospheric precipitation, its frequency and intensity, are of great importance for the release of nutrients into surface waters. Throughout Poland, signs of climate change and its impact on surface waters have been observed, and this impact is increasing year by year. An increased frequency of extreme events, such as intense rainfalls, will in turn lead to an increase in the amount of nutrients entering surface waters from areas used for
agriculture [43] and clearly should be taken into consideration while estimating the impact of the NVZ regulations.
The results obtained, although they should be treated as preliminary, indicate the positive environmental aspects of the implementation of the new NVZ action programme. The introduction of additional restrictions and requirements to limit the release of nitrogen from agricultural sources to surface waters, and further into the Baltic Sea, will undoubtedly reduce excessive amounts of nutrients in the aquatic environment and in dependent waters, improving the functioning of ecosystems in the long term. While there is no doubt that any initiative aimed at improving the quality of the environment is extremely important, it is also necessary to keep in mind the social aspect which, if not well thought out already at the planning stage, can effectively hamper and sometimes even prevent changes. Already at the stage of the strongly truncated social consultations, and from the reactions of institutions representing the interests of farmers, it could be concluded that the new regulations will generate a very large number of conflicts between the legislator and farmers.
Conclusions
The current study made the first attempt to analyse an impact of the new action program aimed to reduce nitrogen pollution caused by agricultural sources in Poland.Implementation of the general requirements imposed through this program on the whole country will, unfailingly, bring benefits to the environment.However, their financial and organizational costs will likely meet discontentment from farmers, which is inevitable when one has to choose between the competitiveness of farms and the improvement of the state of the environment.In addition to these changes, more specific restrictions will be imposed on the territories designated for reduction or extension of the period when the use of nitrogen fertilizers is allowed.
To incorporate details of the stipulated changes in the fertilizer use calendar, the modelling approach was adopted.With the use of the macromodel DNS/SWAT the baseline model, and three variant scenarios, were created for the pilot catchment (Słupia River).The obtained results have shown that introduction of more rigorous restrictions on nitrogen fertilizer use would have a considerable impact on TN load reduction in the areas subjected to such changes ("pole of cold").Therefore, the total load of nitrogen discharged from the Polish territory could be also reduced.However, the extension of the fertilizer use period will likely result in an increase of total nitrogen load released from the catchments located at the "pole of heat" region.The analyses described in the article have also confirmed the strong relationship of atmospheric precipitation with the amount of nutrients in surface waters.Climate change is becoming more and more visible throughout Europe, which will, inter alia, cause intensification of violent meteorological phenomena such as rapid rainfall.As a result, more and more nitrogen could be released from cultivated fields, through run-off to surface waters, causing algal blooms in water bodies, including the Baltic Sea.Therefore, new programs of measures limiting the use of nutrients are necessary.
The general questions, namely whether the described changes in legislation will help to improve the overall quality of surface waters, and whether the costs incurred as a result of these changes and of the conflicts between the legislator and the farmers will ultimately prove worthwhile, are still premature to answer. However, it must not be forgotten that any introduction of new legislation, especially legislation that covers the whole area of the country, requires long and well-prepared social consultations and information programmes, as well as transitional periods. Only the combination of these three elements could limit the aforementioned conflicts. At the same time, it must never be forgotten that farmers will bear the main burden of implementing these provisions, and the role of the state, in addition to caring for the environment, should also include concern for the competitiveness and good condition of farms.
Figure 1 .
Figure 1.(a) Areas delimited as "pole of cold" and "pole of heat"; (b) modelling area-the Słupia River catchment.
Figure 2 .
Figure 2. Comparison of the TN loads (kg N/month) for the baseline scenario (VS0) and pole of cold variant scenario 1 (VS1) at the Charnowo profile.
Figure 3 .
Figure 3.Comparison of the TN loads (kg N/month) for the baseline scenario (VS0) and pole of cold variant scenario 2 (VS2) at the Charnowo profile.
Figure 4 .
Figure 4. Comparison of the TN loads (kg N/month) for the baseline scenario (VS0) and pole of heat variant scenario (VS3) at the Charnowo profile.
Table 1 .
Comparison of the fertilization periods in the pole areas and the rest of the country.
• Data from the 1750 stations belonging to the State Hydrological and Meteorological Service (PSHM) carried out by the IMGW, used for preparation of the yearly average temperature distribution maps for the period of 1981-2014, to select the communes with the lowest and highest average temperatures;
• Data from the Institute of Soil Science and Plant Cultivation (IUNG) describing the estimated length of the growing seasons in the period of 2011-2020 [34], to select the communes with the shortest and longest growing seasons; and
• Data from the Plan of the Rural Areas Development [35], delimiting areas with unsuitable agricultural conditions (in the mountain areas).
Table 2 .
Monthly average values of TN loads in the Charnowo profile for the summer and winter periods for the baseline and variant scenarios.
"year": 2018,
"sha1": "cdc8ba266f0ea86c5ce77797884c06292406f3ff",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/10/9/3297/pdf?version=1537231912",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "cdc8ba266f0ea86c5ce77797884c06292406f3ff",
"s2fieldsofstudy": [
"Environmental Science",
"Law"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Antagonistic, synergistic and direct effects of land use and climate on Prairie wetland ecosystems: Ghosts of the past or present?
Wetland loss and degradation threaten biodiversity to an extent greater than in most other ecosystems. Science-supported responses require understanding of the interacting effects of land use and climate change on wetland biodiversity.
| INTRODUCTION
Wetland ecosystems require immediate and sustained conservation attention as they are experiencing declines in biodiversity greater than those in the most affected terrestrial ecosystems (MEA, 2005).
Concern has been mounting regarding how stressed freshwater systems will cope with rapid, ongoing global changes (Reid et al., 2018;Reis et al., 2017;Vörösmarty et al., 2010). Despite this, there remains a lack of integration of land use change and climate change in studies of species distributions and abundances (Sirami et al., 2017), especially in freshwater systems (Piggott, Townsend, & Matthaei, 2015;Taniwaki, Piggott, Ferraz, & Matthaei, 2017). Understanding the interacting effects of climate change and land use change is necessary to inform climate change adaptation and mitigation measures.
Whether stressors act additively (i.e., the combined effect is the sum of their individual effects), synergistically (i.e., the combined effect is larger than the sum of their individual effects) or antagonistically (i.e., the combined effect is smaller than the sum of their individual effects) will have a critical bearing on outcomes for biodiversity and conservation decisions (Cochrane & Laurance, 2008;Oliver & Morecroft, 2014). If climate change and land use change have synergistic negative effects on biodiversity, then it is vital to anticipate these non-additive effects (Cochrane & Laurance, 2008;Zak, Cabido, Cáceres, & Díaz, 2008). If the interaction is antagonistic (e.g., increased precipitation buffers some land use change effects), then we may be able to allocate limited conservation resources more effectively (Didham, Tylianakis, Gemmell, Rand, & Ewers, 2007;Oliver & Morecroft, 2014). However, until we can identify and understand these interactions, our ability to recommend interventions with high likelihood of success in achieving broad biodiversity conservation goals is limited.
Multiple stressors in freshwater ecosystems have resulted in population declines and range reductions of freshwater species worldwide (Heino, Virkkala, & Toivonen, 2009;Reid et al., 2018). Yet, our understanding of the combined and interacting effects of climate change and land use change (e.g., habitat loss, nutrient enrichment) on wetland biodiversity is limited (Anteau, 2012;Porter et al., 2013;Schindler, 2001) and the implications of these interactions for wetland ecosystems have generally been based on broad assumptions rather than empirical data (Rashford et al., 2015;Schindler, 2001).
For example, it is expected that climate change will interact with ecosystem conversion and degradation to alter turbidity and eutrophication of aquatic ecosystems (Häder, Kumar, Smith, & Worrest, 2007;Schindler, 2001) and may be exacerbated by vegetation loss (Didham et al., 2007;Oliver & Morecroft, 2014). However, not all plant and animal species will be negatively affected; some will adapt and possibly benefit from changes (Davis, Lake, & Thompson, 2010), while other species are likely to suffer catastrophic declines (Didham et al., 2007;Oliver & Morecroft, 2014). Thus, to better adapt to climate change, we must improve our understanding of the processes generating climate and land use change interactions and assess the consequences of these interactions on wetland biodiversity. This will improve our ability to incorporate climate change predictions and interactions with land use change into the design of conservation strategies, which currently represents a major deficiency in wetland ecosystem management and policy (Abell, 2002;Munang et al., 2010).
Here, we examine whether interactions between current climate, climate change and agricultural land use drive patterns in avian and aquatic macroinvertebrate communities to gain insights about how climate change has affected biodiversity of prairie ecosystems. We differentiate between current climate and climate change because a warm and/or wetter year/period may benefit or disadvantage wetland biodiversity in the short term, for example, by improving foraging/growing conditions or starving young (Crick, 2004). However, greater rates of change in temperature and precipitation over time may act as a reoccurring "ghost" forcing species to adapt or perish depending on evolutionary processes and whether other species/ stressor interactions are present (Brooker, Travis, Clark, & Dytham, 2007;Hoffmann & Sgrò, 2011).
We evaluate which guilds and functional feeding groups of birds and macroinvertebrates are influenced by climate and land use interactions by analysing a large spatially representative data set (617 sites across 156,318 km 2 ) from south-central Alberta, Canada, within the North American Great Plains. The region has lost 60%-70% of its original wetlands and >70% of its native grasslands due to agricultural development (ABMI, 2015), and land conversion pressures continue. Climate change is causing profound shifts in the seasonal availability and distribution of water and aquatic vegetation (Johnson et al., 2005;Shook & Pomeroy, 2012). Climate change may dramatically affect the phenology (annual recurrence of phenomena) of vegetation, seed production and insect emergence (Skagen et al., 2011), the intensity of agriculture and the physiological suitability of the region for cold-limited plants and animals (Bellard, Bertelsmeier, Leadley, Thuiller, & Courchamp, 2012). These stressors to wetland systems in south-central Alberta are similar to those faced across the Great Plains and throughout the world. group of conservation concern. Riparian vegetation ameliorated the negative impacts of climate and water quality gradients on MTR and could mitigate global change impacts in agricultural systems.
KEYWORDS
agriculture, antagonistic, aquatic macroinvertebrates, birds, climate change, functional group, interaction, synergistic, water quality, wetlands

Thus, our broad goal was to clarify the impacts of climate variability and change on wetland-associated biota, factors that have received much less attention than land use or habitat-specific drivers. We predicted that (a) the abundance and richness of birds and aquatic macroinvertebrates would be positively related to area of natural, perennial upland cover (i.e., low cropland area) and high wetland abundance in sites with the highest precipitation and lowest temperatures because upland and wetland sites with lower precipitation and higher temperatures are more vulnerable to drought and eutrophication. We also predicted that (b) functional groups with less specialized diets, foraging habitats and greater adaptive capacity would be less influenced by climate and land use change (Brooker et al., 2007; Hoffmann & Sgrò, 2011). Aquatic macroinvertebrates, particularly midges and dragonfly and damselfly larvae, would be most impacted by land use intensity and warmer/drier conditions due to their sensitivity to water quality (Hornung & Rice, 2003; McCormick, Shuford, & Rawlik, 2004). We also predicted that (c) sites with higher temperatures or that experienced larger long-term temperature increases would have lower overall bird species richness and macroinvertebrate taxa richness due to effects on physiological development (Cox, Thompson, Reidy, & Faaborg, 2013; Piggott et al., 2015) and (d) sites with higher precipitation or that experienced larger long-term precipitation increases would have higher species richness due to reduced predator activity or higher resource availability (Cox et al., 2013). However, excessive precipitation or long-term increases in precipitation could reduce invertebrate richness by increasing nutrient levels and the incidence of eutrophic water bodies as a result of higher agricultural run-off (McCormick et al., 2004). Furthermore, we predicted that (e) survey year and long-term temperature and precipitation effects on both avian and macroinvertebrate communities would be modest compared with the negative effects of low grassland cover and wetland abundance at sites with intensive agriculture (LeBrun, Thogmartin, Thompson, Dijak, & Millspaugh, 2016; Scrimgeour & Kendall, 2003; Stanton, Morrissey, & Clark, 2018).
| Study region
South-central Alberta lies at the northern extent of the North American Great Plains, a region (156,318 km 2 ) characterized by thousands of glacially formed wetlands in a landscape matrix of natural grassland and agriculture ( Figure 1). It is renowned for biological diversity (ABMI, 2015), but is one of the most productive agricultural regions in the world (Campbell, Zentner, Gameda, Blomert, & Wall, 2002). The region is vulnerable to severe droughts due to low precipitation and high evapotranspiration in summer (Schindler & Donahue, 2006). The study region intersects the mixed grasslands and parkland ecoregions. The grassland ecoregions are typified by rolling terrain with dark-brown topsoil, subhumid to semi-arid moisture conditions, and a mix of native and tame grasses and shrubs in non-cropland areas. Parkland is classified by groves of aspen and patches of shrublands within a grass and cropland matrix; soils are typically darker and the ecozone has a slightly cooler climate (Alberta Parks, 2015).
The Alberta Biodiversity Monitoring Institute (ABMI) has measured biodiversity, habitat and human footprint throughout Alberta (latitude: 49°-60°, longitude 110°-120°) since 2007. The ABMI database was chosen because it is one of the largest and longest-running systematic upland and wetland-specific monitoring programs in the North American prairies and was explicitly designed to allow assessments of biodiversity responses to environmental conditions over space and time. The surveys include aquatic macroinvertebrates (among them chironomid and odonate larvae) and passerine bird species (adults), and both insect taxa in particular serve as indicators of ecosystem quality (Hornung & Rice, 2003; McCormick et al., 2004). Macroinvertebrate taxa richness was calculated as the number of unique taxonomic ID numbers measured at the species, genus or family level to retain the diversity of macroinvertebrate lifecycles (ABMI, pers. comm.). At each ABMI upland site (n = 337; Figure 1), breeding bird species presence and abundance were determined using a standard 10-min point-count survey with audio recording units. Birds were classified into a species richness index and 21 functional groups according to Sundstrom, Allen, and Barichievy (2012), Poole (2005) and co-author (RGC, EB) expertise on specific dietary and foraging strategies during the breeding season (see Appendix S1).
| Biodiversity, water chemistry and riparian habitat data
Water chemistry was measured at the deepest point of the wetland (dissolved oxygen and dissolved organic carbon [mg/L], specific conductance [mScm −1 ], salinity (ppt) and total nitrogen and phosphorous [μg/L]). Riparian habitat amount (riparian width = total width in metres for the emergent, fen and margin zones combined, and the per cent cover of forbs, shrubs, grasses, sedges, rushes, and deciduous and coniferous trees) was recorded. The amount of 'nonwoody vegetation' was then calculated as the total per cent cover of forbs, grasses, sedges and rushes averaged across all riparian quadrants (north, east, south, and west), whereas 'woody vegetation' was calculated as the total per cent cover of shrubs and all trees.
Both variables are commonly used for incorporating different landscape attributes in grassland conservation studies (Cunningham & Johnson, 2006).
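A minimal R sketch of deriving these two riparian covariates is given below; the per-quadrant data frame, its column names and the example values are illustrative assumptions about the layout of the ABMI field data.

library(dplyr)

# Hypothetical per-quadrant riparian cover (%) at one wetland site
riparian <- data.frame(
  site     = "W001",
  quadrant = c("north", "east", "south", "west"),
  forbs   = c(10, 12, 8, 15), grasses = c(30, 25, 35, 28),
  sedges  = c(5, 8, 6, 4),    rushes  = c(2, 0, 1, 3),
  shrubs  = c(10, 15, 5, 8),  trees   = c(20, 10, 25, 18)
)

riparian %>%
  group_by(site) %>%
  summarise(nonwoody = mean(forbs + grasses + sedges + rushes),  # averaged over quadrants
            woody    = mean(shrubs + trees))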
| Land cover data
We calculated buffers for aquatic macroinvertebrates and birds separately because wetland survey locations were offset (247 m to 13.47 km) from the upland survey points used for birds. For indicators of land cover, we extracted 30-56 m resolution buffers (100 m radius for aquatic macroinvertebrates and 500 m radius for birds) around each survey point, corresponding to the ABMI survey year (2007-2015), using annual land cover data layers (from 2009 onward; AAFC, 2009). For the two survey years preceding the annual crop inventory (2007-2008), we used the 2009 layer and assumed that the total cropland and land management/tillage system was representative. An additional tree cover indicator was calculated as the total % cover of mixed forest and broadleaf forest within each buffer. Coniferous forest was excluded because it was negligible in the study area. The two buffer sizes were chosen to characterize the surrounding landscape effects on aquatic macroinvertebrates within wetlands (100 m), and the habitat conditions where birds were detected (500 m), corresponding with the field survey design (nine point-count stations in a grid pattern with 300 m between stations; ABMI, 2014).
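A minimal R sketch of summarising land cover within such buffers is given below. The toy raster, the projected point coordinates and the crop class codes are illustrative assumptions, and the raster package is used here for brevity; the original processing combined ArcGIS 10.5 and R.

library(raster)

# Toy stand-in for an annual crop inventory layer: integer class codes on a
# projected grid in metres (the real layers are the AAFC annual inventories)
landcover <- raster(nrows = 100, ncols = 100, xmn = 0, xmx = 3000, ymn = 0, ymx = 3000,
                    crs = "+proj=utm +zone=12 +datum=WGS84 +units=m")
set.seed(7)
values(landcover) <- sample(c(133, 136, 146, 200), ncell(landcover), replace = TRUE)

cropland_classes <- c(133, 136, 146)   # assumed codes for annual crop classes

# Two hypothetical survey points and all cell values within a 500 m radius
pts  <- cbind(x = c(1500, 800), y = c(1500, 2200))
vals <- extract(landcover, pts, buffer = 500)

# Proportion of cropland within each buffer (one value per survey point)
prop_cropland <- sapply(vals, function(v) mean(v %in% cropland_classes, na.rm = TRUE))
prop_cropland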
| Climate data
Figure 1 (caption fragment): … (AAFC, 2009) and the distribution of the Alberta Biodiversity Monitoring Institute survey sites (filled symbols) used in this study (n = 337 for birds and n = 280 for aquatic macroinvertebrates). Water represents lakes, rivers and wetlands.

We used the Canadian gridded estimates (50 km resolution) from the source with record identifier de6-b20a-46cc-8990-01862 ae15c5f (accessed September 2017). Two long-term climate indices (mean maximum temperature and rainfall variability from the 40 years preceding the survey year), two climate change indices (precipitation change and temperature change between the 1901-1940 and 1976-2015 periods) representing the ghost of the past, and four recent weather and climate indices (spring temperature, spring precipitation, total fall/winter/spring precipitation and mean annual precipitation over the last 15 years) were calculated using values corresponding to the grid closest to each site (i.e., nearest neighbour) and survey year (see Appendices S2, S3 for details). All GIS processing was undertaken using ArcGIS 10.5 and R 3.4.0 (R Core Team, 2018).
| Statistical analyses
To test whether land use, water chemistry, current weather and climate, and climate change exert additive or interacting effects on biota, and whether specific functional groups of macroinvertebrates and birds are more vulnerable or resilient to land use, climate and climate change effects, we used mixed-effects linear regression models (Zuur, Ieno, Walker, Saveliev, & Smith, 2009). If the relationships are purely additive, we predicted that the effects of cropland area and wetland abundance on invertebrates and birds, including functional groups with more specialized diets and/or foraging habitats, will be the same across sites regardless of precipitation or temperature. Conversely, temperature and precipitation effects on invertebrates and birds will be the same across sites regardless of whether nearby cropland and wetland area varies. If antagonistic interactive relationships exist, we predicted weaker negative effects of cropland area and weaker positive effects of wetland abundance on birds and invertebrates at sites with the lowest precipitation and highest temperatures, possibly due to negative effects of increased salinity and other aqueous chemicals in precipitation-related runoff (Hornung & Rice, 2003; McCormick et al., 2004). If synergistic interactive relationships exist, we predicted stronger negative effects of cropland area and stronger positive effects of wetland abundance on birds and invertebrates at sites with the lowest precipitation and highest temperatures (Cox et al., 2013; Piggott et al., 2015), because upland and wetland sites with lower precipitation and higher temperatures are more vulnerable to drought and eutrophication.
Prior to analysis, Pearson's correlation coefficient was used to test for correlations among predictor and response variables. No response variable was correlated with latitude or longitude (r < 0.5, p < .05). Breeding bird abundance was removed to reduce the number of analyses. We removed rainfall variability, mean maximum temperature, mean annual precipitation and specific conductance to reduce effects of collinearity (Graham, 2003), but climate variables were substituted and analyses rerun to confirm the importance of variables retained in the final models. We also retained precipitation change and temperature change because we predict that greater rates of warming/rainfall will be linked to individual functional groups, and we tested whether total % cropland was a better predictor than grassland. The remaining weather, climate, land use, water quality and riparian habitat variables (Table 1) were standardized ([x − mean]/SD) for effect size comparisons. Wetland sites were used for predictors in the macroinvertebrate surveys, but upland sites were used for predictors in the bird surveys.
Macroinvertebrate taxa richness and bird species richness were log-transformed and fit with a Gaussian error distribution (lme4 package, Bates, Maechler, Bolker, & Walker, 2014), while the number of chironomids, odonates and each of the 16 bird functional groups were fit with negative binomial error distributions (glmmADMB package) because of their skewed distributions (Fournier et al., 2012). Three bird functional groups (bark invertivore, bark omnivore and terrestrial pollinator) were too rare to model, and aerial carnivores were removed because their home-range size is greater than our 500 m radius land use buffer sizes (Leary, Mazaika, & Bechard, 1998).
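A minimal sketch of the two model families in R is given below. It uses lme4 throughout, with glmer.nb standing in for the negative binomial mixed models that were fitted with the glmmADMB package in the original analysis; the simulated data frame and the predictor names are illustrative.

library(lme4)

# Toy site-level data with standardized predictors and a year identifier
set.seed(42)
d <- data.frame(
  log_rich     = rnorm(180, 2, 0.3),
  n_chironomid = rnbinom(180, mu = 20, size = 1.2),
  cropland     = rnorm(180), spring_temp   = rnorm(180),
  temp_change  = rnorm(180), precip_change = rnorm(180),
  year         = factor(rep(2007:2015, length.out = 180))
)

# Gaussian model for log-transformed taxa richness, random intercept of year
m_rich <- lmer(log_rich ~ cropland + spring_temp + temp_change + precip_change +
                 (1 | year), data = d)

# Negative binomial model for chironomid counts (glmer.nb in place of glmmADMB)
m_chir <- glmer.nb(n_chironomid ~ cropland + spring_temp + temp_change +
                     precip_change + (1 | year), data = d)

summary(m_rich)
# An interaction such as cropland x spring temperature is added to the fixed
# effects as cropland * spring_temp (or cropland:spring_temp for the term alone)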
Models were ranked based on AIC c (Burnham & Anderson, 2002). We began with a small number of predictors as fixed effects (cropland, pasture and forages, spring temperature, temperature change and precipitation change) and potential random effects (survey year from 2007-2015, latitude [to account for a natural species richness-latitude patterns], latgroup [latitude grouped into six classes], longitude). We tested the relative fit of different random effects (intercept and slope) of year, latitude, latgroup and longitude, to determine which predictors to include in all subsequent linear mixed-effects models (Beale, Kendall, & Mann, 1967). Ultimately, we used only a random intercept effect of year with the five initial fixed effects predictors in all subsequent models, because only one random intercept for year was consistently significant (p < .05) across response variables (Table 1) to account for annual effects on biodiversity that were not explained by environmental covariates.
We treated the initial model with five fixed effects and a random intercept for year as the initial null model, then sequentially added the remaining predictors, including interaction terms, one by one, and compared the AIC c values of the models with and without the additional predictor. We fit models with interaction terms representing relationships depicted in Figure 2. The effects of multiple drivers were considered interactive if models including interaction terms had lower AIC c values than their additive versions. To account for the probability that a given estimate came from the best model, parameter estimates were model-averaged from models that were within 2 AIC c units (ΔAIC c ≤ 2). We calculated R 2 statistics following Jaeger, Edwards, Das, and Sen (2017) as a measure of absolute model fit.
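A minimal R sketch of the AICc bookkeeping and of averaging fixed effects over models within 2 AICc units is given below. It uses a simulated data frame like the one in the previous sketch; the manual AICc and Akaike-weight formulas are generic, and averaging is shown only for coefficients shared by all candidate models.

library(lme4)

set.seed(42)
d <- data.frame(
  log_rich = rnorm(180, 2, 0.3),
  cropland = rnorm(180), spring_temp = rnorm(180),
  temp_change = rnorm(180), precip_change = rnorm(180),
  year = factor(rep(2007:2015, length.out = 180))
)

aicc <- function(m) {            # small-sample corrected AIC
  k <- attr(logLik(m), "df")
  n <- nobs(m)
  AIC(m) + 2 * k * (k + 1) / (n - k - 1)
}

# Candidate models: initial model and the same model plus one interaction term
m0 <- lmer(log_rich ~ cropland + spring_temp + temp_change + precip_change +
             (1 | year), data = d, REML = FALSE)
m1 <- update(m0, . ~ . + cropland:spring_temp)

aicc_vals <- c(m0 = aicc(m0), m1 = aicc(m1))
delta     <- aicc_vals - min(aicc_vals)
weights   <- exp(-0.5 * delta) / sum(exp(-0.5 * delta))   # Akaike weights

# Model-averaged estimates for terms present in all models with delta <= 2
keep   <- delta <= 2
models <- list(m0, m1)[keep]
w      <- weights[keep] / sum(weights[keep])
shared <- c("(Intercept)", "cropland", "spring_temp", "temp_change", "precip_change")
betas  <- do.call(rbind, lapply(models, function(m) fixef(m)[shared]))
colSums(w * betas)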
Some sites within the database were monitored twice (5-year rotation; n = 48 for birds and 41 for aquatic macroinvertebrates), but were considered independent due to the 4+ year interval relative to changing weather and land use rotations in this system. To confirm this, we re-analysed best approximating models by including random effects of year and siteID and obtained virtually identical results to a model with random effects of year alone.
| Model selection
We looked at the relative ability of additive and interactive models to explain the data distribution of each dependent variable, with (a) better models having lower AIC c statistics and (b) models with AIC c within 2 AIC c of the lowest AIC c model having equivalent explanatory ability.
Model selection yielded one to five plausible models within 2 AIC c of the best approximating model for each biodiversity response (Appendix S4). In all cases, except aquatic and terrestrial invertivores (where ΔAIC c was 5.5), the ΔAIC c of the null (intercept-only) model was >20.0. All of the top-ranked models explained a relatively similar amount of the total variance in the data (R 2 mean = 0.2, range = 0.04-0.45), but the model for terrestrial insectivores was highest, explaining 45%. Ten of twenty models with lowest AIC c included an interaction term between two of the explanatory variables, suggesting that land use, weather and climate change exert interacting and additive influences on avian and aquatic invertebrate communities.
| The influence of land use and climatic factors on aquatic macroinvertebrates
We tested for positive effects of grassland and woody plant cover, wetland abundance and precipitation, and negative effects of increased cropland cover, temperature, salinity, TP and/or DOC on different aquatic invertebrates. We found that numbers of chironomids and odonates, and overall macroinvertebrate taxa richness, were most strongly associated with water chemistry gradients (salinity, TP and DOC; Figure 3). Odonates responded most negatively to TP. Abundances of chironomids were positively related to the amount of cropland surrounding the wetland basins.
We tested whether negative effects of aqueous chemistry and climate variables were reduced (antagonistic interactions) or intensified (synergistic interactions) by increases in specific land cover types around aquatic survey sites. We found that an antagonistic relationship occurred between non-woody vegetation and salinity on macroinvertebrate taxa richness; a relatively high (>75% = 50th data percentile) % cover of non-woody riparian vegetation reduced the negative relationship of salinity (Figure 4a). A weak synergistic relationship between cropland and total precipitation on chironomids was detected; under higher total precipitation, sites with more cropland cover (>55% = 80th data percentile) were associated with fewer chironomids than sites with less cropland cover, but the trend reversed under dryer conditions (chironomids were more abundant at sites with higher cropland cover (>45% = 75th data percentile); Figure 4b). Another weak synergistic relationship was detected between total precipitation and DOC on chironomids; lower precipitation (188 mm = 20th data percentile) and high DOC were associated with lower chironomid abundance, whereas low precipitation and low DOC were associated with higher abundance ( Figure 4c). Finally, a synergistic relationship was found between grassland and TP on odonates; more grassland cover (50% = 75th data percentile) increased the negative impact of higher TP on odonate abundance (Figure 4d), but when TP was lowest and high grassland cover was present, odonates were most abundant. All other interactions and coefficients were small in magnitude and/or had large error estimates.
| The influence of land use and climatic factors on birds
We tested for positive effects of grassland and woody plant cover, wetland abundance and precipitation, and negative effects of increased cropland cover and temperature on bird richness and abundance of different functional groups. There were many similarities in the land use and climatic effects among bird functional groups and overall bird species richness (Figure 3). Aquatic invertivores, aquatic omnivores and terrestrial invertivores displayed larger negative coefficients for spring temperature or total precipitation.
We tested whether negative effects of temperature and positive effects of precipitation were reduced (antagonistic interactions) or intensified (synergistic interactions) by increases in specific land cover types around bird survey sites, for instance, to unveil whether negative effects of cropland could be offset by extent of natural habitat or warmer, wetter climatic conditions. A negative interaction effect (cf. synergistic relationship) was found between temperature change and cropland on aquatic and terrestrial insectivores, these being less abundant at sites where temperature has increased the most over time (1.32°C = 80th data percentile) and where the % cover of cropland was high (100%); at sites where % cropland was low (0%) and temperature has increased the most over time (1.32°C), abundances of aquatic and terrestrial insectivores were highest (Figure 4e). We also found an interaction (cf. antagonistic relationship) between cropland and temperature change on terrestrial insectivores (Figure 4f); in areas where temperature has increased the most over time, the negative relationship of cropland on terrestrial insectivores was less in comparison with sites where temperature has increased the least. We identified an antagonistic relationship between shrubland and temperature change on aerial insectivores (Figure 4g); in areas where shrubland was low, higher temperature change ameliorated the negative relationship of less shrubland on aerial insectivores. This antagonistic interaction between shrubland and temperature change was also evident for terrestrial herbivores, arboreal herbivores and arboreal insectivores, but at sites with relatively high shrubland cover (>50%), the abundances of these functional groups were highest where temperature has increased the least (<0.97°C = 20th data percentile; Appendix S5). We identified two opposing interactions between trees and temperature change on terrestrial omnivores and aquatic omnivores: higher temperature change antagonistically reduced the negative relationship of trees on terrestrial omnivore abundance (Figure 4h), whereas higher temperature change synergistically increased the negative impact of higher tree cover on the abundance of aquatic omnivores (Appendix S5). Similarly, higher total precipitation also synergistically increased the negative relationship of higher tree cover on the abundance of terrestrial invertivores (Appendix S5). Finally, a synergistic relationship between shrubland and precipitation change was detected on arboreal omnivores that was not shared with any other functional group (Appendix S5). All other interactions and coefficients were small or error estimates were large.

FIGURE 2 A conceptual model of the direct and a priori interaction effects between land use, current weather and climate change that were tested. All indirect links (i.e., dashed lines) are bidirectional interactions in terms of their effects on invertebrates and birds, but with only one arrow shown for each link. For a description of each type of variable (e.g., current weather, climate change), see Table 1 and main text. The direct and interacting effects of riparian habitat and water chemistry were only tested with aquatic macroinvertebrates in this study because this wetland information could not be directly related to the breeding bird data (>200 m between wetlands and bird survey sites).
| DISCUSSION
Climatic and land use variables are related to the responses of avian populations and aquatic macroinvertebrates, but most responses were taxon-specific. Most relationships were direct and several were strong, including some antagonistic and synergistic interactions.
| Aquatic macroinvertebrates
FIGURE 3 Model-averaged (±unconditional SE) coefficients from the linear regression models in Appendix S4. All variables are scaled to enable direct comparisons among coefficients in both direction and magnitude. See Table 1 for a description of all variables.
Macroinvertebrate abundance and taxa richness appear largely driven by water quality, specifically salinity levels, TP and DOC, with some modulation by the surrounding upland and climate. Prairie ponds vary naturally in salinity depending on soil composition and diverse hydrological processes (Euliss & Mushet, 1999). Salinity alters macroinvertebrate community structure and increasing salinity reduces richness in prairie wetlands (Bortolotti, Vinebrooke, & St Louis, 2016;Euliss & Mushet, 1999) and other wetland types (James, Cant, & Ryan, 2003). Likewise, large DOC gradients are common in prairie wetlands and are likely driven more by in-pond processes (e.g., production, respiration) than surrounding land use (Waiser, 2006).
Chironomids and odonates could respond to factors that may covary with a DOC gradient such as whole-system productivity, vegetation community composition or sediment characteristics (Bortolotti et al., 2016;Euliss & Mushet, 1999). The negative associations between TP and chironomid and odonate abundances, coupled with positive associations with the amount of surrounding cropland, were surprising, although wetland invertebrate abundance was positively associated with cropland cover in a recent study (Janke, Anteau, & Stafford, 2019). Positive associations between cropland and invertebrates could reflect the tendency for more productive land to be farmed more intensively. TP concentrations tended to increase with surrounding cropland, but multiple ponds had high (>3 mg/L) TP and no cropland within 100 m of wetland survey points, suggesting that macroinvertebrates may respond to anthropogenic and livestock inputs and impacts that occur at larger scales. Riparian buffer strips can greatly improve the quality of agricultural wetlands by reducing nutrient loading, erosion and other contaminants entering the water due to surface run-off (Schulte et al., 2017;Vought, Pinay, Fuglsang, & Ruffinoni, 1995). Protecting and/or restoring riparian zones could also prevent shifts in higher trophic levels from specialized to generalized insectivores due to changes in relative abundances of primary producers (Blann, Anderson, Sands, & Vondracek, 2009). However, our finding of a synergistic relationship between grassland and TP on odonates is inconsistent with this theory. Despite the interaction being weak, higher grassland cover did not ameliorate the impact of higher TP on odonate abundance. Odonates may be responding to factors that we did not measure (e.g., grazing cattle entering wetlands or per cent cover of bare ground).
We also found a negative relationship of precipitation and synergistic relationships between cropland and total precipitation and between DOC and total precipitation on chironomids. Precipitation and climate play critical roles in the ecology of wetlands (Brooks, 2000;Eimers, Buttle, & Watmough, 2008), with heavy precipitation events known to depress or delay chironomid production at certain times of year (Euliss & Mushet, 1999) or to affect invertebrates via flushing out nutrients from exposed soils (Steinman, Conklin, Bohlen, & Uzarski, 2003), causing higher sedimentation (Gleason, Euliss, Hubbard, & Duffy, 2003) or diluting water chemistry (Eimers et al., 2008). It is possible that the fewer chironomids found at sites with higher precipitation and higher cropland could represent a "flushing effect", supported by the weaker three-way interaction with DOC that we found (Figure 3). Yet, under drier conditions (i.e., less precipitation), chironomids were more abundant at sites with higher cropland cover and lower DOC. This might be due to environmental factors that we did not measure, such as the composition of underlying sediments or the abundance of predators (Schindler, 2006). For instance, the depth of organic sediment may benefit certain taxa, including chironomids, and agricultural activities such as tillage and seeding operations may increase organic sedimentation rates (Cooper, Uzarski, & Burton, 2007). Given that >100 species of chironomids occur in Alberta, it is also possible that these patterns reflect abundance-species trade-offs associated with varying land use and climate conditions, or species-specific responses to wetland chemistry (Saether, 1979). Odonates use different habitats within wetlands and may be less susceptible to sedimentation effects. In comparison with other freshwater systems such as streams and rivers, prairie wetlands support relatively low macroinvertebrate diversity and communities composed of ecological generalists that are relatively resilient to extreme environmental conditions as a result of a long history of agriculture and strong natural environmental gradients, including drought-deluge cycles, in the region (Euliss & Mushet, 1999;Tangen, Butler, & Ell, 2003). In other regions less heavily impacted by climatic extremes and agriculture or other intensive land use, we may anticipate stronger relationships.
In the best model(s) for each response variable, the proportion of variance in our response variables explained by model predictors varied from 0.04 to 0.45 at most, suggesting that there were important variables missing from our models. Differences in prairie wetland hydrology, including variation in water depth and levels over time, may have influenced some of our results, at least for the invertebrate samples. For example, chironomids, odonates and birds in some feeding guilds may have been more abundant at sites with higher cropland cover and lower precipitation, because wetlands surrounded by more cropland may be more likely to be replenished by run-off than wetlands surrounded by more grassland. Grass roots have been shown to facilitate greater soil infiltration of water and reduce the time that water is on the surface to contribute to run-off (Van der Kamp, Hayashi, & Gallen, 2003).
At the same time, wetlands surrounded by more cropland may experience greater fluctuations in water levels (Euliss & Mushet, 1996), with fluctuations declining with wetland permanence and water depth (Johnson, Boettcher, Poiani, & Guntenspergen, 2004). Upland wetlands are more likely to be temporary. While we lack drainage and water level fluctuation data for the wetlands in our study, we probably reduced some hydrological effects on our results by limiting our analyses to test the effects of fish-free wetlands <3 m deep.
| Avian species richness and functional groups
Positive associations of pasture and forages, grassland and wooded lands, and negative associations of cropland with most bird groups and overall bird species richness were consistent with previous avian studies in grassland communities globally (Azpiroz et al., 2012;Fuller et al., 1995;Stanton et al., 2018). As natural grasslands are converted to farmland, bird specialists decline and some generalist species benefit (Julliard, Clavel, Devictor, Jiguet, & Couvet, 2006;Kampichler, Turnhout, Devictor, & Jeugd, 2012). In Alberta, aquatic and terrestrial omnivores, terrestrial invertivores and terrestrial carnivores showed a positive association with cropland, and breeding birds such as some duck species, Red-winged Blackbird, and Ring-billed Gull (aquatic and terrestrial omnivores) and Long-billed Curlew (terrestrial carnivore) can increase in response to increases in hayfields and croplands (Clark & Weatherhead, 1986;Janke et al., 2019;Jobin, DesGranges, & Boutin, 1996). Patterns could also be associated with higher soil fertility in cropland. Area of wetlands also had a positive relationship with almost all of the aquatic-associated bird functional groups, which typically breed or forage in wetland-rich areas (Steen & Powell, 2012).
FIGURE 4 Interactions between land use, weather and climate change for aquatic macroinvertebrates (n = 280 sites) and bird functional groups (n = 337 sites). (a) Antagonistic relationship between salinity and non-woody vegetation on macroinvertebrate taxa richness. (b) Synergistic relationship between cropland and total precipitation on chironomids. (c) Synergistic relationship between total precipitation and DOC on chironomids. (d) Synergistic relationship between TP and grassland on odonates. (e) Synergistic relationship between cropland and temperature change on aquatic and terrestrial insectivores. (f) Antagonistic relationship between cropland and temperature change on terrestrial insectivores. (g) Antagonistic relationship between shrubland and temperature change on aerial insectivores. (h) Antagonistic relationship between trees and temperature change on terrestrial omnivores. Each of the three shaded regression lines represents the 20th, 50th and 80th data percentile or the 20th, 75th and 90th percentiles (if data are highly skewed) in the moderator with 95% confidence intervals. Confidence intervals for these predictions do not incorporate uncertainty in the estimate of variance for the random intercept 'Year', so may be slightly narrower than they should be. Response variables were plotted in log scale and the covariates were standardized ([x − mean]/SD) to better display the relationships between variables. Note that other covariates in these models (cf. Appendix S4) were set to their mean values.
Our findings suggest that climate change and recent climate may have a stronger influence than current land use on birds, and while climate relationships are well documented, the additional effects of changes since the early 1900s constitute novel findings that merit further investigation as explained below. We initially predicted stronger land use effects (LeBrun et al., 2016;Scrimgeour & Kendall, 2003).
This unexpected result could reflect the relatively low number of studies that have evaluated simultaneously climate change effects relative to those of land use. Precipitation and/or temperature changes over time were consistently key determinants of bird species richness and abundances of specific functional groups, showing three times more positive relationships than negative. Richness and abundance of birds were highest at sites where precipitation and temperature increased the most since the early-mid-1900s, consistent with previous studies (Skagen & Adams, 2012). Other studies indicate that climate plays an important role in determining abundance, but effects tend to be habitat-and species-specific and may differ over a species' annual cycle and range (LeBrun et al., 2016;Lemoine, Bauer, Peintinger, & Böhning-gaese, 2007;Stephens et al., 2016). As the earth is warming, some migratory birds are arriving from the south and nesting earlier in North America and Europe (Butler, 2003). Many species are also moving to areas that have become progressively warmer and possibly wetter (Hitch & Leberg, 2007;Thomas & Lennon, 1999). Thus, if climate trends continue as projected, it is likely that the influence of local climate and climate change will overtake land use as the principal driver of bird populations (Forcey, Linz, Thogmartin, & Bleier, 2007;Lemoine et al., 2007), with evidence here that the shift has already begun in the Canadian prairies. Some species might also shift ranges to escape extreme temperature conditions, and the areas that have become progressively warmer and wetter in Alberta over the past century, may now be more attractive or suitable to birds. Another possible mechanism for this response is that there may be a long-term lag effect of precipitation or temperature change over time (since 1901)-where systems that experience progressively warmer, wetter conditions become more productive, diverse and take longer to develop more abundant biological populations (Pearson & Dawson, 2003).
In contrast, aquatic carnivores, aquatic omnivores, terrestrial carnivores and terrestrial omnivores were less abundant where temperature has increased the most. Likewise, in Europe, increasing temperatures associated with climate change led to both increasing agricultural intensification and reduced terrestrial invertebrate food sources and foraging habitat available to grassland and aquatic birds (Kleijn et al., 2010). Alternatively, these functional groups along with terrestrial carnivores and omnivores (like sparrows and blackbirds) usually nest on the ground or in low vegetation where intensified agricultural practices expose nests to more predators or destruction by farm machinery (Wilson, Whittingham, & Bradbury, 2005), or patterns could potentially be correlated with other factors. For instance, sites that experienced the greatest change in temperature and precipitation since 1901 are also the wettest in recent years and have fluctuated the least in rainfall. Aquatic invertivores, aquatic omnivores and terrestrial invertivores, on the other hand, were more sensitive to spring temperature or total precipitation; their abundances decreased with increasing spring temperatures and higher rainfall. Rather than affecting these bird groups physiologically, negative temperature relationships might be associated with changes in the distribution of other species (competitors, predators, parasites) or reduce habitat quality for the affected bird groups (Pearson & Dawson, 2003).
To date, few studies have examined the response of avian communities to interactions between habitat and climatic changes, and these generally focused on species range shifts in forests (Benning, LaPointe, Atkinson, & Vitousek, 2002;Guo, Lenoir, & Bonebrake, 2018;Melles, Fortin, Lindsay, & Badzinski, 2011), global predictions of species richness (Jetz, Wilcove, & Dobson, 2007;Storch et al., 2006) or the widely recognized synergistic effects between temperature, precipitation and habitat loss (Cox et al., 2013;Mantyka-Pringle, Martin, & Rhodes, 2012). We observed several land use-climate interactions on aquatic and terrestrial birds (combined) in agricultural landscapes. First, we detected that greater cropland cover at sites where temperature has increased the most over time resulted in lower abundance of aquatic and terrestrial insectivores. Higher temperatures over time have exacerbated the negative effects of cropland and habitat loss on bird abundances in other studies (Kleijn et al., 2010;Mantyka-Pringle et al., 2012).
However, we detected an antagonistic relationship between cropland and temperature change on terrestrial insectivores alone (which includes a variety of ground, shrub, tree, and cavity-nesting birds, mostly passerines). Higher temperature change ameliorated negative relationships of cropland on terrestrial insectivore abundance, a group that could have responded negatively to lower food supplies due to agricultural intensification (Benton, Bryant, Cole, & Crick, 2002;Wilson et al., 2005). We also found that shrubland cover and temperature change had positive relationships on aerial insectivores, terrestrial herbivores, arboreal herbivores and arboreal insectivores. Adverse impacts of shrub cover losses weakened where temperatures had increased the most over time. Finally, we identified opposing relationships between temperature change or total precipitation and tree cover. Temperature change and tree cover had negative relationships on aquatic omnivores and terrestrial omnivores like sparrows, blackbirds and shorebirds, which tend to be less abundant as woodland dominates the landscape (Bakker, Naugle, & Higgins, 2002). However, higher temperature change reduced the negative relationship of trees on the abundance of terrestrial omnivores possibly by enhancing terrestrial food sources, whereas higher temperature change and higher total precipitation synergistically increased the negative relationship of trees on aquatic omnivores and terrestrial invertivores, respectively. It is possible that higher temperatures provide some birds with greater food availability in cropland sites or sites with less species-specific natural habitat (Skagen & Adams, 2012). Evidence suggests that stressful conditions appear to drive local population dynamics (Parmesan, 2006), and the different responses observed by the avian functional groups probably relate to how their life history traits and physiology influence the ability of species to adapt to changes (Jiguet, Gadot, Julliard, Newson, & Couvet, 2007).
Despite our large sample size (i.e., sites), caution is required when interpreting correlative information because manipulative research is necessary to verify our findings. We also cannot rule out the possibility that the temperature and precipitation relationships are the result of other confounding spatial variables, even though we considered critical land use and weather variables (and spatial location: latitude and longitude) in our analyses. Further work is needed on soils, hydrology, topography, agricultural pesticides and other potential predictors (Kennedy, 1999).
| Conservation implications
There is an urgent need to address multiple drivers of environmental change, given that interacting threats intensify biodiversity loss (Mazor et al., 2018). A better understanding of interactions can result in improved mitigation strategies, for example, by reducing the impact of local stressors that synergistically interact with global stressors such as climate change and affect biodiversity loss (Didham et al., 2007;Oliver & Morecroft, 2014;Zak et al., 2008). There will always be relative winners and losers with global change. Consequently, species with low adaptability and/or dispersal capacity are generally disproportionately negatively impacted (Heino et al., 2009;Walther et al., 2002). We therefore hypothesize that smaller-sized and/or more specialized organisms tend to respond more strongly to environmental and climate variation than larger and more generalist organisms because of their smaller-scale dependencies on water chemistry, climate and habitat heterogeneity. That we found several interactions between climate and land cover variables on birds also illustrates that each group responds according to specific habitat needs. Generalizing across taxa or even guilds and functional feeding groups can be problematic as we may miss important species-specific responses. One avian group of high conservation concern comprises aerial insectivores because recent population declines may be linked to changes in populations of flying insects (Michel, Smith, Clark, Morrissey, & Hobson, 2016;Nebel, Mills, McCracken, & Taylor, 2010). Now that we have highlighted an important antagonistic response between temperature change and shrubland on aerial insectivores, further work is needed to evaluate how aquatic and terrestrial insects are linked and how these insects respond to on-farm manipulations of natural habitats such as vegetation buffers along field margins and within wetland basins. Some of the patterns presented contrast with our predictions that higher temperatures would be associated with a decrease in overall richness of birds and aquatic macroinvertebrates, and higher rainfall would be associated with increased richness (Cox et al., 2013;Piggott et al., 2015), and these signal that region-specific climate patterns and climatic change may be just as important as local land use pressures and global trends.
Landscapes with a higher proportion of riparian vegetation provide more refugia for species already vulnerable to habitat loss and will only become more important as climate and land use effects intensify. Therefore, our discovery that riparian vegetation ameliorates the negative impacts of climate and water quality gradients on a variety of aquatic macroinvertebrates is key for mitigation. Increasing or maintaining riparian vegetation should be considered in future studies using land management experiments in agricultural environments. Government policies, however, should retain wetlands, areas of natural habitat and riparian buffers to reduce disturbances and the negative consequences from increasingly intensive agriculture.
ACKNOWLEDGMENTS
We thank the Alberta Biodiversity Monitoring Institute (http:// www.abmi.ca/home.html) for providing the biodiversity, habitat and human footprint data used in our analyses. We also thank two anonymous reviewers and the Editor, Gwen Iacona, for their thoughtful and constructive reviews. Funding was provided by a Mitacs Elevate | 2019-10-03T09:11:43.355Z | 2019-09-26T00:00:00.000 | {
"year": 2019,
"sha1": "bba42b213bc5684c46fcd8a564adbcfad884f7f6",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ddi.12990",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "7ea9c08c461ec337d00ccf977368a5e4f748c9d8",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
255624227 | pes2o/s2orc | v3-fos-license | Environmental chemicals and endogenous metabolites in bile of USA and Norway patients with primary sclerosing cholangitis
Abstract Primary sclerosing cholangitis (PSC) is a complex bile duct disorder. Its etiology is incompletely understood, but environmental chemicals likely contribute to risk. Patients with PSC have an altered bile metabolome, which may be influenced by environmental chemicals. This novel study utilized state-of-the-art high-resolution mass spectrometry (HRMS) with bile samples to provide the first characterization of environmental chemicals and metabolomics (collectively, the exposome) in PSC patients located in the United States of America (USA) (n = 24) and Norway (n = 30). First, environmental chemical- and metabolome-wide association studies were conducted to assess geographic-based similarities and differences in the bile of PSC patients. Nine environmental chemicals (false discovery rate, FDR < 0.20) and 3143 metabolic features (FDR < 0.05) differed by site. Next, pathway analysis was performed to identify metabolomic pathways that were similarly and differentially enriched by the site. Fifteen pathways were differentially enriched (P < .05) in the categories of amino acid, glycan, carbohydrate, energy, and vitamin/cofactor metabolism. Finally, chemicals and pathways were integrated to derive exposure–effect correlation networks by site. These networks demonstrate the shared and differential chemical–metabolome associations by site and highlight important pathways that are likely relevant to PSC. The USA patients demonstrated higher environmental chemical bile content and increased associations between chemicals and metabolic pathways than those in Norway. Polychlorinated biphenyl (PCB)-118 and PCB-101 were identified as chemicals of interest for additional investigation in PSC given broad associations with metabolomic pathways in both the USA and Norway patients. Associated pathways include glycan degradation pathways, which play a key role in microbiome regulation and thus may be implicated in PSC pathophysiology.
Introduction
Primary sclerosing cholangitis (PSC) is a rare, chronic cholestatic liver disease characterized by inflammation and fibrosis of the bile ducts and impaired bile flow that leads to end-stage liver disease and hepatobiliary neoplasia. 1 Liver transplantation is currently the only evidence-based option for advanced disease; no drug therapy exists to improve transplant-free survival. 2 PSC likely develops from a combination of genetic and environmental contributors, but these are incompletely understood, either individually or together. [3][4][5] These complex interactions between environment and host have galvanized research into the exposome in PSC. 6 The exposome is defined as the cumulative environmental influences and corresponding biological responses throughout the lifespan. 7 While endogenous processes can be characterized using well-developed -omic technologies (eg, genomic, proteomic, transcriptomic, and metabolomic instruments), the ability to characterize environmental exposures on the -omic scale has been limited by challenges in measuring complex exposure profiles that potentially include thousands of exposure biomarkers. [7][8][9][10] However, recent advances in high-resolution mass spectrometry (HRMS) for small molecule profiling facilitate improved, -omic-scale investigation of the exposome. 11 This enables measurement and analysis of internal chemical doses and biological responses, with sufficient exposome coverage to investigate the complex relationships between potential disease drivers, biological effects, and clinical outcomes.
Through integrative analysis of HRMS-detected exposures and endogenous metabolic pathways, a relationship between chemical exposure and biological response has been identified in the plasma of patients diagnosed with PSC. 6 This suggests a critical role for environmental exposures in PSC pathophysiology. Given that PSC is a disease of the bile ducts, characterizing the exposome of bile, which directly contacts the diseased tissue, is imperative for advancing our molecular understanding of the disease. It is well known that the excretion of biotransformed chemicals (such as via glucuronidation and sulfation) into bile is a major metabolic elimination mechanism. Parent chemicals as well as conjugated metabolites may enter bile, 12 yet in PSC, these parent compounds and metabolic conjugates have not been identified. The only human PSC bile metabolomic study to date suggested aberrant bile formation in PSC (n = 7), compared to individuals with noncholestatic end-stage liver disease (n = 19), and nondisease controls (n = 12). 13 However, those specimens were collected as part of a liver transplant procedure. Additional characterization of bile exposures from samples collected via endoscopic retrograde cholangiopancreatography (ERCP), a more representative procedure for bile collection, is warranted for better biological understanding of PSC.
In this work, we utilized a novel HRMS-based strategy to characterize environmental chemicals and endogenous metabolites present in the bile of patients with PSC, providing the first comprehensive exposome characterization of bile in complex liver disease ( Figure 1). We hypothesized that integrative network analysis between different geographical locations would (1) provide insights into the shared and distinct bile exposures of patients with PSC and (2) facilitate exploration of the role of environmental chemicals in any observed differences. The statistical interactions between environmental chemicals and endogenous metabolites derived from network analysis may inform potential mechanisms underlying the pathophysiology of PSC.
Study design and population
The sample comprised 54 patients with PSC (n = 24 who received care at Mayo Clinic in Minnesota, USA, and n = 30 who received care at Oslo University Hospital in Oslo, Norway) (Table 1). As collecting bile from individuals without liver disease is challenged by the invasiveness and risk of complications of ERCP, this cohort, aimed at bile characterization, included only patients with PSC. All patients met the diagnostic criteria for PSC according to the guidelines published by the American Association for the Study of Liver Diseases and the European Association for the Study of the Liver: (1) biochemical evidence of chronic cholestasis (≥6 months); (2) cholangiographic findings of multifocal strictures alternating with segmental dilatations in the bile ducts and/or histological findings consistent with PSC; and (3) causes of secondary sclerosing cholangitis have been excluded. 14,15 Medical charts of all patients were reviewed for the accuracy of PSC diagnosis and related clinical complications. For each patient, the following data were extracted: sex, age at the time of diagnosis of PSC, date of last known clinical follow-up, liver biochemistry measurement performed within 3 months of bile collection (alkaline phosphatase, alanine aminotransferase, aspartate aminotransferase, and total bilirubin), inflammatory bowel disease (IBD) status, progression to/development of clinically important endpoints (eg, compensated and decompensated cirrhosis, cholangiocarcinoma, liver transplantation, and development of colorectal cancer), and medications recorded at the time of bile collection (Table 1). Bile was collected during prescheduled ERCP as part of the patient's clinical care. Collected specimens were kept and transported on ice, centrifuged to remove debris, and aliquots were stored frozen at −80 °C until use. Research procedures were conducted in accordance with the approval of the Institutional Review Board at the Mayo Clinic and the Research Ethics Committee at Oslo University Hospital. Written informed consent was obtained from all participants.
High-resolution exposomics
Environmental chemicals were measured in bile using gas chromatography high-resolution mass spectrometry (GC-HRMS) and liquid-chromatography high-resolution mass spectrometry (LC-HRMS). GC-HRMS was utilized as the primary environmental chemical platform as many environmental chemicals are hydrophobic, semi-volatile, and present ionization challenges with popular LC-HRMS methods. 11 LC-HRMS data (described in the "High-resolution metabolomics" section) were used as an additional data source for annotating environmental chemicals. 16 The analytes selected for targeted GC-HRMS analysis were based on a library of organic environmental chemicals that are widely used and occur frequently in the environment. This includes the common persistent organic pollutants (POPs) such as polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers, and pesticides. These bioaccumulate over time, disrupt metabolic and endocrine function, and have a high toxicity. 17 In addition to the routinely biomonitored chemicals (eg, by the National Health and Nutrition Examination Survey program), we also included contemporary contaminants such as polycyclic aromatic hydrocarbons (PAHs), insecticides/pesticides, flame retardants, plasticizers, flavoring agents and food additives, phthalates, and chemicals used for personal care. It is hypothesized that these may be present in bile as bile provides a major route for metabolic elimination of conjugated chemicals (eg, through glucuronidation or sulfation) and parent compounds. 12 These chemicals may also be subject to enterohepatic circulation mediated by the bile, increasing their retention time (RT) within the bile ducts, liver, blood, and digestive system.
Briefly, for GC-HRMS sample profiling, 13C-labeled chemical standards, each with 99% isotope enrichment, were spiked at a final concentration of 1 ng/mL for quality control and assurance, as previously reported. 11,18 Environmental chemicals in 150 µL bile samples were extracted with 50 µL formic acid followed by 200 µL hexane-ethyl acetate (2:1 v/v, ≥99% pure, Sigma-Aldrich). The chilled mixture was shaken vigorously and centrifuged to obtain the organic supernatant, which was further cleaned with high-purity MgSO4. MgSO4 provides similar efficacy and similarly high reproducibility for cleaning compared with dispersive solid phase extraction. 11 The bile extracts were analyzed with three injections using GC-HRMS with a Thermo Scientific Q Exactive GC hybrid quadrupole Orbitrap mass spectrometer with 2 µL per injection. Data were collected from 3 to 24.37 min with positive electron ionization mode (+70 eV), scanning from m/z 85.0000 to 850.0000 with a resolution of 60 000. National Institute of Standards & Technology Standard Reference Materials (SRM) 1958 and SRM-1957 were analyzed in every batch of 20 samples to support quality control and batch effect evaluation. Contamination and carryover were assessed in isooctane washes, solvent blanks, and method blanks, which were run at the beginning of each batch by monitoring of peak baseline. Raw data were extracted using XCMS. 19 Ninety-two environmental chemicals met the criteria for "Level 1" identification 11 by comparison of accurate mass, fragmentation patterns, and RT to an in-house library of authentic standards run on the same instrument using identical analytical parameters. 11 Average peak intensities of the three technical replicates per sample were used to quantitatively represent levels of environmental chemicals.
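As a small illustration of the last processing step described above (collapsing the three GC-HRMS technical injections to a single value per sample), a base-R sketch is shown below; the `peaks` data frame and its column names are hypothetical placeholders, not the authors' actual objects.

```r
# Hypothetical long-format table of extracted peaks:
# columns chemical, sample_id, replicate, intensity.
chem_levels <- aggregate(intensity ~ chemical + sample_id, data = peaks, FUN = mean)
# `chem_levels` now holds one averaged intensity per chemical per sample,
# the quantity carried forward into the association analyses.
```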
High-resolution metabolomics
Untargeted, LC-HRMS profiling of bile was completed in batches of 40 study samples using established methods with two platforms, C18 chromatography with negative electrospray ionization (ESI), and hydrophilic interaction liquid chromatography (HILIC) with positive ESI, as described in detail. 20 Briefly, 65 µL bile aliquots were treated with two volumes of ice-cold acetonitrile to precipitate the proteins. A mixture of 10 stable isotope internal standards was included for quality control as previously reported. Following 30 min incubation on ice, samples were centrifuged for 10 min at 16 100 g at 4 °C. The supernatants were analyzed with dual chromatography-coupled HRMS (Thermo Scientific HF Q-Exactive). The HRMS was operated in full scan mode at 120 000 resolution and mass-to-charge ratio (m/z) range of 85-1275. Raw data files were extracted and aligned using the R package apLCMS 21 with modifications by xMSanalyzer (for details, see Supplementary materials). 22 Amongst additional functions, xMSanalyzer evaluates the quality of each feature and removes the low-quality features. 22 For example, features with <75% correlation amongst the three technical replicates were deemed low quality and removed. Uniquely detected peaks consisted of m/z, RT, and ion abundance referred to as metabolite features. Peak extraction detected 9735 C18 and 1522 HILIC metabolite features. For quality control purposes, a 10% feature missingness threshold was employed, leaving 3526 C18 features and 5978 HILIC features for inclusion in subsequent analyses. Peak annotation for endogenous metabolites was performed following metabolome-wide association study (MWAS) (described under the "Statistical analysis" section) using the mummichog 2.0 algorithm 23 on Metaboanalyst 24 and the Homo sapiens MFN pathway library, a manually curated library that originates from numerous sources including KEGG, BiGG, and Edinburgh Model. 24 Peak annotation for environmental compounds in LC-HRMS data was conducted using xMSannotator 16 with the Human Metabolome Database (HMDB) (for details, see Supplementary materials). 25 xMSAnnotator uses a multi-stage clustering algorithm to derive compound annotation and confidence scores, which range from 0 (no confidence) to 3 (high confidence). 16 Chemical annotations derived from xMSAnnotator with high or medium confidence scores (≥2) and with the M + H adduct (positive mode) or M − H (negative mode) are equivalent to the Level 2 confidence score by the Mass Spectrometry Imaging (MSI) criteria. 26 Lower confidence annotations (MSI Level 4) were derived from HMDB and the Metlin mass spectrometry databases at 5 p.p.m. tolerance.
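The two feature-level quality-control rules described above (dropping features with <75% correlation across technical replicates and features missing in more than 10% of samples) can be expressed in a few lines of R; this is a schematic sketch with hypothetical objects (`X`, a feature-by-sample intensity matrix, and `replicate_cor`, a precomputed vector of per-feature replicate correlations), not the xMSanalyzer implementation.

```r
# Keep features whose technical replicates agree (>= 75% correlation) and that are
# observed in at least 90% of samples (<= 10% missingness).
missingness <- rowMeans(is.na(X))
keep <- replicate_cor >= 0.75 & missingness <= 0.10
X_qc <- X[keep, ]
```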
Statistical analysis
All statistical analyses were implemented in R version 4.0.3 27 using RStudio version 1.3. 28
Figure 1. Conceptual overview. (A) Bile samples were collected from patients with PSC located in the USA and Norway. Samples were assayed for environmental chemicals and metabolites using GC and LC-HRMS. (B) Analytical pipeline. Intensities of 92 chemicals were characterized and compared across sites. Metabolomic pathway analysis was performed, and pathways were compared for enrichment by the site. All identified chemicals and pathways were integrated using a network science approach to derive chemical-metabolite association networks that best characterize each site. Site-specific analyses were done given observed differences in chemical intensities and metabolomic pathway enrichment by site. Figure created with the help of Biorender.com.
Exposomic analysis -Environmental-wide association study (EWAS)
GC-HRMS and LC-HRMS assayed exposures were analyzed separately because the GC-HRMS workflow produced 92 confidently identified environmental compounds, while the LC-HRMS workflow produced annotations for environmental compounds. Peak intensities were log2 transformed and standardized using their median and interquartile range prior to all statistical analyses. Following transformation and standardization, hierarchical clustering using Euclidean distance and complete linkage was performed on both patients and GC-HRMS identified chemicals, by site, to assess whether groups of patients with similar clinical and demographic features would cluster by bile chemical profiles. Next, multiple linear regression was used to evaluate the association of environmental chemicals with geographical location (EWAS). In this EWAS, for each chemical, the log2-transformed intensity was modeled as a function of location (USA or Norway), controlling for age, sex, and duration of PSC, which are known to influence biochemical concentrations and/or disposition. [29][30][31] To reduce the number of false positives, all chemicals associated at FDR <0.20 with the location were considered significant. Additionally, given the high comorbidity of IBD with PSC, 32 a second, exploratory analysis was conducted to assess potential associations (FDR < 0.20) 6 of GC-HRMS-identified environmental chemicals with IBD status. This analysis controlled for patients' location, sex, age, and duration of PSC. LC-HRMS-annotated environmental exposures were manually curated based on accurate mass matches to dietary, environmental chemical, and microbiome metabolites from xMSannotator. 16
Metabolome-wide association study
MWAS was performed to identify site-associated metabolic features (reported by m/z and RT). 33 Data pre-processing and analyses were performed separately for the C18 and HILIC columns. 6,34 Multiple linear regression was utilized to model the log2 feature intensity as a function of site (USA or Norway), controlling for age, sex, and duration of PSC (as in EWAS). Due to the large number of features, an FDR threshold of <0.05 was used to account for multiple testing and to reduce false positives. This more stringent LC-data FDR threshold compared with the GC threshold (FDR < 0.20) was implemented to further reduce the possibility of false positives in the LC analyses, which were based on the untargeted chemical intensities, compared to the GC analyses based on identified chemicals.
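A compact sketch of the per-feature workflow shared by the EWAS and MWAS is given below, assuming a chemicals-by-samples matrix `chem` and a covariate data frame `meta` with columns site, age, sex and psc_duration; the object and coefficient names (e.g., "siteUSA") are illustrative assumptions rather than the authors' code.

```r
# Median/IQR standardization of log2 intensities, one linear model per chemical,
# then Benjamini-Hochberg FDR adjustment of the site coefficient p-values.
std <- function(x) (x - median(x, na.rm = TRUE)) / IQR(x, na.rm = TRUE)

site_p <- apply(chem, 1, function(y) {
  fit <- lm(std(log2(y)) ~ site + age + sex + psc_duration, data = meta)
  summary(fit)$coefficients["siteUSA", "Pr(>|t|)"]
})

site_fdr <- p.adjust(site_p, method = "BH")
hits <- names(site_fdr)[site_fdr < 0.20]   # FDR cut-off used for the GC-HRMS chemicals

# The same per-feature model, with an FDR threshold of 0.05, corresponds to the MWAS
# applied to the untargeted LC-HRMS feature intensities.
```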
Metabolomic pathway analysis
Pathway analysis was performed using Mummichog 23 implemented through MetaboAnalyst. 24 Mummichog enables the identification of pathways enriched by a condition (presently, geographical location) from untargeted metabolomics data without a priori identification of metabolites. Mummichog predicts metabolite identity and calculates pathway enrichment using Fisher's exact test. 23 A list of all detected features which passed the 10% feature missingness threshold (for a combined total of 9506 features from C18 and HILIC chromatography) was imported to Mummichog. Features were ranked by their MWAS statistical significance. Pathways that were differentially enriched by location in features meeting an FDR-adjusted MWAS significance threshold of 0.05 were identified. All significantly different pathways were required to contain at least three mapped metabolites meeting the FDR threshold of 0.05. Similarly enriched pathways (by the same cutoff) were also identified. Analysis was performed with a mixed ion mode, with a mass tolerance of 5 p.p.m., with RT present, and with primary ions enforced.
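For intuition, the enrichment test that mummichog applies to each pathway reduces to a Fisher's exact test on a 2 x 2 table of significant versus non-significant features inside and outside the pathway; the toy numbers below are invented purely for illustration (mummichog additionally handles the ambiguity of mapping m/z features to candidate metabolites, which this sketch ignores).

```r
# Toy example: 9506 detected features, 300 significant overall,
# a pathway with 20 mappable features of which 12 are significant.
tab <- matrix(c(12, 300 - 12,                        # significant: in pathway, not in pathway
                20 - 12, 9506 - 300 - (20 - 12)),    # not significant: in pathway, not in pathway
              nrow = 2, byrow = TRUE)
fisher.test(tab, alternative = "greater")$p.value
```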
Metabolomics-Exposomics integration analysis
The exposome-metabolome network analysis aimed to identify associations between environmental chemicals and metabolomic pathways that best characterize the bile content of patients with PSC by geographical location. This facilitates an understanding of the common and distinct composition of PSC bile at different geographical locations. Inputs to the analysis included the 92 environmental chemicals assayed and the 95 metabolomic pathways identified in pathway analysis, all of which were adjusted for age, sex, and duration of PSC. Pathways were represented by principal component 1 of all pathway metabolites. 6 The analysis was completed using xMWAS, 35 which provides an automated framework for integrative and differential network analysis. Pairwise integration between chemicals and metabolomic pathways was performed through a canonical sparse partial least squares (sPLS) regression analysis. All associations with |r| ≥ 0.6 and a Bonferroni-adjusted value of P < 5.72 × 10⁻⁶ (.05 divided by [92 chemicals × 95 pathways]) were retained and visualized using Cytoscape. 36 Communities of tightly correlated chemicals and pathways were detected by multilevel community detection. 37 The assumption underlying community detection is that communities comprise functionally related molecules. 35 Networks and communities were visualized to compare the associations of environmental chemicals and metabolomic pathways by the geographical site.
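The edge-retention rule quoted above is simple to reproduce; the sketch below shows the Bonferroni arithmetic and the filtering step, assuming hypothetical matrices `assoc` (sPLS association scores) and `assoc_p` (corresponding p-values) with chemicals as rows and pathways as columns, which is not the literal xMWAS output format.

```r
# Bonferroni-adjusted significance threshold for 92 chemicals x 95 pathways:
alpha_bonf <- 0.05 / (92 * 95)    # = 5.72e-06
# Retain chemical-pathway edges with |association| >= 0.6 and p below the threshold.
edges <- which(abs(assoc) >= 0.6 & assoc_p < alpha_bonf, arr.ind = TRUE)
```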
Demographic and clinical characteristics
We summarize patient characteristics in Table 1. The sample comprised 46% and 50% women in the USA and Norway groups, respectively. Patients had a similar median age at PSC diagnosis (40 years of age in the USA and 37 years of age in Norway). There was no difference in the prevalence of IBD between cohorts. The most prescribed IBD medication in both cohorts was Mesalamine (Asacol). Furthermore, there were no differences in the rates of clinically important endpoints between the two cohorts (see comorbidities, Table 1). No patients had received a liver transplant before the time of bile collection. At the time of bile collection, the Norway patients were on average slightly younger and had lower rates of antidepressant medication, antihypertensive medication, and vitamin/supplement use. Overall, 69% of the patients in these samples had comorbid IBD, which is consistent with the literature. 32
Exposomic analysis -EWAS
Nine environmental exposures identified using GC-HRMS were associated with geographical location (FDR < 0.20) (Figure 2; Supplementary Table SI). These include pesticide and insecticide compounds (alpha-BHC, bioallethrin, prothiofos), a PAH (fluorene), and five PCB congeners. Levels of all of these chemicals were higher in patients in the USA compared with Norway.
Hierarchical clustering showed no patient clustering patterns by sex, age group, duration of PSC, Crohn's disease status, ulcerative colitis status, ursodiol prescriptions, or vitamin supplementation. For example, men did not separate from women through clustering (and likewise, the remaining variables did not show separation by groups) (Figure 2C; Supplementary Table SI). The exploratory analysis assessing associations of environmental chemicals with IBD status controlling for location, sex, age, and duration of PSC demonstrated that no compounds were associated (FDR <0.20) with IBD status, although six were associated at a nominal P < .05 (Supplementary Table SII). Ninety-seven of the 241 LC-HRMS-annotated environmental compounds met the quality control criteria of having <10% feature missingness. Of these 97, 22 were significantly different between the USA and Norway patients (Supplementary Table SIII). These annotated chemicals include drugs (Cotinine methonium ion), nutritive compounds (Vanillic acid), and environmental chemicals (Benzofuran).
Metabolomic differences by site
Following a 10% feature missingness threshold, 3526 features from the C18 chromatography column and 5978 features from the HILIC column were assessed for association with the geographical location through MWAS (described in the "Metabolome-wide association study" section). A total of 581 C18 features and 2562 HILIC features met an FDR threshold of 0.05, indicating that their intensity could be modeled through linear regression as a function of the geographical site, accounting for age, sex, and duration of PSC (Figure 3).
Metabolomic pathway analysis
Pathway enrichment analysis was performed using mummichog, 23 which infers pathway activities from a ranked list of mass spectrometry peaks that were derived through MWAS. Ranked features were imported into mummichog, then pathway enrichment of top-ranked features (FDR < 0.05 associating with geographical location) was calculated. Fifteen pathways were significantly enriched in top-ranked features by geographical location (P < .05), and 80 were similarly enriched between sites ( Figure 4A for differential pathways; for all pathways, see Supplementary Table SIV). The differentially enriched pathways fall under broad categories of amino acid, glycan, carbohydrate, and vitamin/cofactor metabolism. Concentrations of putative metabolites localizing to these 15 differential pathways were both increased and decreased in the USA patients, depending on the metabolite ( Figure 4B; Supplementary Table SV). Of these 15 pathways, compounds in the tyrosine metabolism pathway had the highest fold-change differences (both higher and lower) in patients across locations ( Figure 4B).
Exposome-Metabolome integration analysis
The integrative network analysis was performed to characterize associations between identified environmental chemicals and metabolic pathways in bile and to compare these associations by geographical location (Figures 1 and 5). A canonical sPLS regression approach enabled pairwise integration of the 92 identified environmental chemicals with the 95 metabolomic pathways. Additionally, communities of highly associated pathways and chemicals within networks were detected.
Four communities comprising a total of 33 pathways were associated with one or multiple of six chemicals in the USA patients. The chemicals represented on the USA network include five, which are significantly higher in the USA than the Norway patients by EWAS (PCB-101, PCB-87, PCB-118, and bioallethrin), and one that was detected at comparable levels in the USA and Norway patients (2-monobromodiphenylether). The metabolic pathways associated with these chemicals included those with significantly different enrichment by geographical location (in the broad categories of amino acid metabolism, carbohydrate metabolism, and glycan biosynthesis and metabolism) and with similar concentrations.
Comparatively, fewer environmental chemicals are associated with fewer metabolic pathways in the Norway network. Only three communities, comprising 11 pathways, were associated with one of three chemicals in the Norway patients (Figure 5). PCB-118, PCB-101, and quintozene were retained in the Norway network. These were associated with glycan biosynthesis and metabolism and energy metabolism pathways, which were differentially enriched between sites, as well as pathways that were similar in the USA and Norway bile samples.
Four metabolic pathways were similarly enriched between the USA and Norway patients and associated with environmental chemicals in each sample. The specific chemical-metabolome associations differed by site. In the Norway sample, caffeine metabolism, N-glycan degradation, and glycosphingolipid biosynthesis (ganglioseries) were associated with PCB-101, while these pathways were associated with the bioallethrin and 2-monobromodiphenylether community in the USA sample. Bile acid biosynthesis was associated with quintozene in the Norway sample and PCB-118 in the USA sample. Network statistics can be found in Supplementary Tables SVI and SVII. Lastly, the nontargeted analysis study reporting tool was utilized to evaluate all study designs and reporting procedures (Supplementary Table SVIII). 38
Discussion
This is the first comprehensive characterization of the bile exposome in patients with PSC. Characterization of bile in PSC is critical, as bile directly contacts the diseased bile ducts. Through state-of-the-art HRMS technology and network-based analytical approaches, patients with PSC located in distinct geographical regions were found to have shared and differential environmental chemicals, endogenous metabolites, and chemical-metabolomic associations in bile. The derived chemical-metabolomic associations are an important step in understanding the biochemical changes that coincide with environmental chemical exposure in PSC, as they may reflect mechanisms toward disease pathogenesis or progression. Therefore, the present findings serve as a starting point that highlights key exposures and principles toward understanding the interplay between the environment and host in the bile of patients with PSC.
The MWAS found 3143 of 12 647 (~25%) features to differ between sites, and pathway analysis demonstrated that 15 of the 95 metabolomic pathways were differentially enriched by the geographical site. Thus, this first characterization of metabolomic content by geographical site suggests heterogeneity of bile metabolomic content in patients with PSC based purely on the geographical location. These differences may stem from different environmental exposures, lifestyle variance, or a combination of the two.
This work is the first to reveal the diverse range of environmental chemicals in human bile. Numerous human exposure assessment studies have demonstrated that these chemicals, especially the persistent contaminants, can be detected in various biospecimens and confer adverse effects in several tissues (eg, neurotoxicity and nephrotoxicity). We speculate that the chemicals detected in bile also will affect the liver (the primary site of biotransformation), the digestive system (the primary source of chemical ingestion through food and water), and the bile ducts (through direct contact). It is noteworthy that many chemicals are subject to enterohepatic circulation mediated by the bile, which increases the RT and chemical burden in the liver, blood, digestive system, and bile ducts. Eighty-three environmental chemicals were detected at statistically similar concentrations in patients across the two geographic sites. At both sites, dibutyl phthalate (DBP) had the highest median bile concentration. DBP is an endocrine disruptor that associates with splenic toxicity, obesity, and type II diabetes, with no current known associations with PSC. 39,40 Interestingly, DBP is used for enteric coating in certain formulations of mesalamine (Asacol, Asacol[HD]), 41 a drug used to treat IBD, the most common comorbidity in this population. DBP-containing IBD medications were the most prescribed IBD medications in both the USA and Norway samples. Given the known associations between DBP and disease and the high DBP bile concentrations in these samples, future investigations are warranted to study whether (1) DBP in bile contributes to the development of PSC and (2) IBD pharmacotherapy promotes high DBP bile concentrations.
The bile concentrations of five PCB congeners (PCB-87, PCB-99, PCB-101, PCB-110, PCB-118), three pesticide/insecticide compounds (bioallethrin, prothiofos, and alpha-BHC), and a PAH (fluorene) differed by location in these patients. For all of these, concentrations were higher in patients in the USA compared with Norway. The effect of higher environmental chemical concentrations appears to be increased crosstalk with metabolomic activity, represented through network analysis by the larger number of chemical-metabolomic associations in the USA patients compared with Norway. Thus, upon entry of environmental compounds into bile, the bile ducts encounter not only those environmental compounds but also all associated metabolites. Whether these exogenous agents, the associated endogenous metabolites, or the combination of the two directly harm the bile ducts should be explored in future functional experiments.
The network analyses enable assessment of the chemical-pathway associations which exist in patients at both geographical sites. None of the chemical-pathway associations observed in the USA patients were also observed in the Norway patients. However, pathways represented in the USA network (without their USA network-associated environmental chemicals) and environmental chemicals of the USA network (without their USA network-associated pathways) were observed in the Norway patients.
Specifically, chemicals that were associated with metabolomic activity in both cohorts include PCB-118 and PCB-101. PCBs are highly stable organic chemicals that were widely manufactured in plasticizers, paints, and electrical equipment until they were banned by the Stockholm Convention on POPs. PCB-118 is known to promote the development of cholangiocarcinoma, hepatocholangioma, and hepatocellular adenoma in rats. 42,43 Cholangiocarcinoma is the most common malignancy in patients with PSC. 44,45 PCB-101 associates with fatty liver diseases. 46 Whether PCB-118 and PCB-101 promote the development of PSC, and whether this is mediated by metabolomic activity of pathways represented in the network analyses, warrant future investigation.
In the Norway cohort, PCB-118 was most highly associated with two differentially enriched pathways, heparan sulfate degradation and chondroitin sulfate degradation, both of which are glycan degradation pathways. PCB-101 was most highly associated with one differentially enriched pathway, keratan sulfate degradation (an additional glycan degradation pathway), as well as N-glycan degradation. This contrasts with the USA cohort, where PCB-118 and PCB-101 associate broadly with a larger number of diverse metabolomic pathways. In the USA cohort, the glycan degradation pathways (heparan sulfate degradation, N-glycan degradation, chondroitin sulfate degradation) were associated most strongly with 2-monobromodiphenylether and bioallethrin. The differential chemical-pathway associations across networks may reflect differences in chemical concentrations or chemical-chemical interactions. Of note, however, is the fact that glycan degradation pathways were associated with one or multiple environmental chemicals in both samples of patients. This indicates that metabolomic activity in these pathways may have multifactorial chemical contributors dependent on chemical concentrations or chemical mixtures that these patients are exposed to. Glycans are complex oligosaccharides, which modify proteins, and glycan degradation is one of the major metabolic processes to shape the composition of the gastrointestinal microbiome. 47 The high comorbidity of PSC with IBD has led to accumulating evidence of an altered gastrointestinal microbiome in the pathogenesis of PSC. [48][49][50][51] Given the relevance of glycans to PSC pathophysiology, the chemicals and chemical mixtures characterized in this work which may affect glycan degradation (PCB-101, bioallethrin, 2-monobromodiphenylether) warrant additional investigation.
Figure 5. Multi-omics integration. Integrated networks of environmental chemicals and metabolomic pathways stratified by location. Arrow-shaped labeled pathways represent those that are differently enriched (P < .05) by the site. Large circular pathways represent those that are similarly enriched (P > .05) by the site and represented in both the USA and Norway networks. Pathways with smaller circles labeled as 'P#' are similarly enriched (P > .05) by the site and represented on either the USA or the Norway network. Pathway number corresponds to the pathway analysis results, ordered by the significance of differential enrichment between sites.
To assess whether the metabolic pathways represented in these networks were enriched in the plasma of an independent cohort of patients with PSC, comparisons were drawn between the present analysis and a recent case-control plasma PSC study. 6 None of the compounds (n = 12), which significantly differentiated patients with PSC (n = 80) from healthy controls (n = 40) in the plasma study were assayed in the present work. This highlights the need to determine relevant biomarkers of interest to be explored in multiple physiological compartments (eg, bile, plasma, liver) in future studies.
There are limitations to this study. The study considered 92 environmental chemicals identified by GC-HRMS, providing the first such characterization of bile in patients with PSC. However, it is estimated that more than 100 000 chemicals are present in the environment 8 and that any given individual may have current or past exposures to thousands of chemicals. Therefore, there may be additional chemicals present in the bile of patients with PSC that are below current detection limits or were, due to a transient nature, not present at the time of sampling. Current technologies limit the extent of environmental chemical detection and must continue to evolve to enable large-scale assessments. Additionally, because PSC is a rare disease and collecting bile via ERCP is challenging, the sample size was relatively small. Larger cohorts are necessary to validate the characterized associations between environmental chemicals and metabolomic pathways. Given that this study included PSC cases only, it remains unclear if the presence of environmental and endogenous chemicals in bile fluid is causally or coincidentally related to liver disease. The inclusion of healthy controls would require the performance of an ERCP, an invasive procedure with no benefit and a real risk to the participant, conferring significant challenges to the collection of appropriate control samples. Additionally, these analyses are correlational in nature, and associations between chemicals and metabolic pathways do not necessarily imply causative effects. It is therefore possible that the differential chemical concentrations observed between geographical sites are mediated by sociodemographic factors not collected in the present work (eg, diet, physical activity, occupation, body fat percentage). Mechanistic studies in laboratory animals or in vitro systems are necessary to determine the cause-response relations between these molecules.
In conclusion, this novel study provides the first characterization of the exposome in the bile of patients with PSC. The study demonstrates that it is possible to measure dozens of environmental chemicals in human bile. The results show the heterogeneity of bile in PSC, with shared and variable endogenous and exogenous factors relating to geographical location. Higher concentrations of environmental chemicals in the USA cohort are associated broadly with endogenous metabolic pathways, suggesting functional crosstalk. Derived associations between glycan degradation pathways and environmental chemicals suggest a potential interaction of the gut microbiome with the metabolome and exposome in patients with PSC in a chemical concentration-dependent manner. Future case-control and longitudinal studies are warranted to further elucidate the endogenous and environmental contributors to PSC, which may ultimately guide necessary pharmacotherapy development in PSC. | 2023-01-12T16:49:26.808Z | 2023-01-05T00:00:00.000 | {
"year": 2023,
"sha1": "63a433c5557f550e06c2cd2e759aa7efeabe668f",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bd0c8a219a55a17f4f0d237b33a931c8239e2f40",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254700082 | pes2o/s2orc | v3-fos-license | Humor Types Show Different Patterns of Self-Regulation, Self-Esteem, and Well-Being
Humor styles have been found to be associated with well-being; however, no study has yet addressed the distinct well-being associations of combinations of humor styles, that is, humor types. The present study thus aimed at investigating which combinations of humor styles exist and to which extent these humor types are associated with well-being. In an online questionnaire, the Humor Styles Questionnaire (HSQ, Martin et al. J Res Pers 37:48–75, 2003) and instruments assessing self-regulatory strategies, self-esteem, and well-being were administered to a German sample. Exploratory and confirmatory factor analyses replicated the underlying structure of the HSQ. With hierarchical clustering, we found evidence for three humor types (endorsers, humor deniers, and self-enhancers), which differed in group means for self-esteem, self-regulatory strategies, and well-being. The findings provide further evidence for the positive well-being correlates of self-enhancing humor and specifically highlight the positive correlates of aggressive and self-defeating humor being absent. It is discussed that humor styles cannot be conceptualized as beneficial or detrimental per se, but have to be regarded in context.
might help in overcoming a stressful life situation. Humor, conceptualized as a habitual behavior pattern with the general tendency to laugh or tell funny stories, is a multifaceted construct that might be used, for example, to cheer up others as well as oneself or to engage in personal relations (see Martin et al. 2003, for a review of literature). One might also differentiate humor-related behaviors by the manner in which the humor is delivered, for example, if humor is used to devaluate oneself or others or to appraise one's or others' abilities, respectively. The manner in which humor is delivered is widely accepted as disposition and therefore as certain style of humor. There is reason to believe that humor plays an important role in explaining well-being. Several sayings, like the one introduced earlier, remind us of the ''healing nature of laughter'' or the effectiveness of ''coping with humor''. Empirical evidence, particularly by Martin et al. (2003), shows important associations of humor styles with well-being. However, humor as a psychological construct is characterized by styles that are closely interrelated but not equally adaptive for well-being (e.g., Martin et al. 2003;Ruch 2007). By investigating different constellations of humor styles, new associations might emerge and advance the understanding of humor styles and their association with well-being. Therefore, the present study will develop and use a typology of humor styles similar to the approach of Galloway (2010). Building upon the framework of Martin et al. (2003), and in an attempt to further clarify the associations between humor styles and their contribution to promote well-being, this contribution will investigate how humor types are related to self-regulatory strategies, quality of life, and well-being.
Typology of Humor Styles
To assess differences in humor styles, Martin et al. (2003) developed the Humor Styles Questionnaire (HSQ), an instrument designed to assess habitual humor-related behavior patterns, that is, different styles of humor. They distinguish four humor styles on the two continua ''humor to enhance self versus relationships with others'' and ''benign versus potentially detrimental humor''. To define these humor styles, Martin et al. (2003) illustrate the humor styles with their potential outcomes in terms of well-being and social interactions. Firstly, affiliative humor reflects a humor style that is used to enhance one's relationships with others in a relatively benign way. It is the tendency to tell jokes and funny stories, in order to amuse and laugh with others. Self-enhancing humor refers to humor to enhance the self in a tolerant way and is the tendency to maintain a humorous outlook on life to cheer oneself up. Aggressive humor is a hostile form of humor to enhance the self at the expense of others and includes sarcastic or criticizing humor. Lastly, self-defeating humor is used to enhance relationships with others at the expense and detriment of the self. A self-defeating use of humor is to make fun of oneself for the enjoyment of others, that is, to use humor in a self-disparaging way, or laughing along with others when being made fun of (cf. Chen and Martin 2007). Although there are many concurrent approaches that aim at assessing humor as a form of creativity, i.e., productive ability (e.g., Brodzinsky and Rubien 1976), or as a moral value (Ruch et al. 2010a, b), dispositional humor styles have been found to be validly assessed by the HSQ. For example, humor styles have been shown to distinctly correlate with the ''dark triad'' traits of personality, namely aggressive humor with narcissism, Machiavellianism, and psychopathy (Veselka et al. 2010). However, it is unclear why only singular, one-dimensional associations of humor styles with well-being and personality measures have been investigated to date. First, one-dimensional associations of humor styles with well-being might be inconclusive because different motivational strivings could underlie the use of humor.
For example, aggressive humor can be seen as antisocial and detrimental for social interactions, but could also be useful in enhancing one's feelings of being superior to others or useful in keeping one's place in the social hierarchy, which both might involve a sense of competence, control, and well-being. Self-defeating humor, as a second example, might be useful to (re)negotiate one's place in the social hierarchy, to amuse others by making a fool of oneself, and therefore, in general, to affiliate with others. In this line of reasoning, different humor styles might be combined according to motivational strivings. Second, one-dimensional approaches might be less fruitful than multi-dimensional approaches considering that it is highly implausible that individuals make use of only one certain distinct humor style. Nonetheless, there are-to the best of our knowledge-no assumptions about individual patterns in the use of humor styles. Thus, an exploration of the individual differences in combining the four humor styles would certainly enhance knowledge in this domain. A study conducted parallel to the present contribution provided valuable results in an Australian sample on regrouping humor styles into more broader categories of humor types (Galloway 2010). Galloway showed several associations of the humor types with personality trait measures as a means to explain humor types. A further comparison of the present and Galloway's study will be provided in more detail later. In sum, there are several reasons for preferring multi-dimensional over one-dimensional approaches in the investigation of humor styles, which should thus not be considered as beneficial or detrimental per se. In the present contribution, we therefore aim at further enhancing knowledge on the combinations of dispositional humor styles by investigating which combinations of humor styles exist and thus to develop a typology of humor styles. Investigating the assocations of these humor types with quality of life and well-being measures might advance the understanding of the contribution of humor in explaining quality of life and well-being.
Humor and Well-Being
Concerning the beneficial versus detrimental nature of humor, what do we know about the associations between humor and well-being so far? Empirical evidence on links between physical health, humor, and laughter is ''weak and inconclusive'' (Martin 2001) and some components of humor even seem detrimental for physical health (Kerkkanen et al. 2004). However, much research has been conducted concerning the associations between components of humor and psychological well-being (e.g., Kuiper et al. 2004;Lefcourt and Thomas 1998;Marziali et al. 2008;Olson et al. 2005;Thorson and Powell 1993;Yip and Martin 2006). In general, humor seems to facilitate psychological health and well-being (Thorson et al. 1997) and seems to buffer the impact of stressful life events (Nezu et al. 1988). The positive relationship between sense of humor and well-being is moderated by personality constructs, for example feelings of agency and communion (Kuiper and Borowicz-Sibenik 2005). Also, humor styles have been shown to mediate the association between self-evaluative standards and psychological well-being (Kuiper and McHale 2009). In the present study, we will focus on humor styles and take into account Martin et al.'s findings (2003): They examined the four humor styles regarding their associations with well-being and found multiple significant associations with different well-being measures, especially for selfenhancing and affiliative humor. There, affiliative humor was positively associated to measures of well-being and self-esteem, and negatively related to anxiety and depression. Self-enhancing humor showed the same, and an even stronger, correlation pattern, but was also significantly associated with optimism. Aggressive humor was associated with aggression and hostility, but not with well-or ill-being measures of any kind. Self-defeating humor, lastly, correlated highly positively with anxiety, depression, hostility, aggression, and psychiatric and somatic symptoms, and, negatively, with self-esteem and well-being. These correlation patterns have for the most part been replicated in an Armenian sample (Kazarian and Martin 2006). However, aside from findings on straightforward associations of humor styles with well-being and pathology, it is unclear to which extent individuals are equipped with different combinations of humor styles and if these combinations of humor styles show different associations with well-being. It is highly plausible, considering the moderate, but not too strong associations between the humor styles, that individuals differ in the constellations of humor styles. Differential constellations of humor styles might lead to differential associations with well-being. For example, it is intuitively likely that high levels of aggressive humor in combination with high self-enhancing humor are beneficial, whereas aggressive humor in combination with high self-defeating humor is detrimental for wellbeing. However, these differential associations might not be visible in correlative, meanlevel analyses. Therefore, we believe that it is important to use a person-centered, typological approach to investigate combinations of humor styles, that is, humor types. These humor types will then be investigated with regard to associations with different well-being measures. First, self-esteem has been found to be an important resource for well-being dependent upon humor styles . 
The construct life satisfaction, also investigated earlier in humor research, represents the cognitive-evaluative component of quality of life or well-being (Diener et al. 1985). Further extending the study of humor and its associations with self-esteem and well-being, we were interested in examining associations of humor and self-regulatory strategies. Self-regulatory strategies are behavior patterns concerning the pursuit of goals and can be differentiated in the general tendencies to pursue goals even in the face of obstacles (tenacious goal pursuit) or to adjust personal goals when faced with situational constraints (flexible goal adjustment; Brandtstädter and Renner 1990). To our knowledge, no study has investigated the associations of humor styles and self-regulatory strategies yet. Self-regulatory strategies have been conceptualized as resources that have been shown to be differentially associated with well-being (Brandtstädter and Greve 1994;Forstmeier and Maercker 2008;Wrosch et al. 2003). Self-regulatory strategies are regarded as important precursors to well-being even in the face of adverse circumstances (Brandtstädter and Greve 1994). Both self-regulatory strategies and humor styles have been conceptualized as disposition ''buffering'' stressful events and as means to cope with adversities. Further, both affiliative and self-enhancing humor styles and flexible goal adjustment share a positive reinterpretation of a (perhaps adverse) situation. Therefore, it can be assumed that affiliative and self-enhancing humor styles (and humor types characterized by these styles) are positively associated with flexible goal adjustment. The relationship of humor styles and humor types with tenacious goal pursuit will be investigated without specific assumptions.
Research Questions
Our study is guided by three steps: First, the structure of the Humor Styles Questionnaire found by Martin et al. (2003) will be replicated in a German sample to ensure comparability of results. Secondly, patterns of humor styles will be investigated guided by the question if individuals can be grouped according to different combinations of the four humor styles and develop a typology of humor styles. Lastly, the research question will be explored if humor types differ with respect to associations with self-regulatory strategies, and in their contribution to explaining self-esteem and well-being.
This research question was investigated in a sample of adolescents and adults in young and middle adulthood. In exploratory and confirmatory analyses, the factor structure of the HSQ was investigated. Secondly, with hierarchical clustering, three humor types were identified. Lastly, in analyses of variance, the associations between humor types, selfregulatory strategies, self-esteem, and well-being were examined.
Data Collection
Data collection took place within the work on the diploma thesis of the second author. Due to time limitations, the convenience sample was acquired through e-mail distribution of a link to the online questionnaire. 1 A total of 348 individuals participated in the study. Data of three persons were eliminated because of too short duration of processing the questionnaire and another three persons because of missing data. So, the data of N = 342 participants could be used. Reported age ranged from 15 to 73 years (M = 28.35 years, SD = 10.52). However, after data screening and due to the nonnormal distribution of the age variable (see below), data of N = 305 individuals with an age range of 15-40 years were retained for exploratory and confirmatory factor analyses. Slightly different sample sizes are due to some randomly missing data in the self-regulation inventory (Brandtstädter and Renner 1990, see below). A total of 218 participants were female (71.6 %). When asked for relationship status, 44.2 % reported to be in a relationship, 38.3 % to be single, 17.3 % to be married, and 0.3 % were divorced. In Table 1, age range and relationship status are presented for men and women.
Participants were asked to report their professional status or their profession, respectively. Unfortunately, no information about educational status was available. Therefore, information on profession was recoded with regard to jobs requiring a graduate degree or not. According to this information, more than half of the sample were students (n = 193, 56.4 %) and n = 83 (24.3 %) were employed in jobs without graduate degree. A total of 40 participants (11.7 %) worked in jobs requiring a university degree, n = 11 (3.2 %) were students in a secondary school (comparable to a US high school or college), n = 6 (1.8 %) were retired, and n = 4 (1.2 %) in an apprenticeship.
Self-Esteem
The revised German version of the Rosenberg Self-Esteem Scale was used (Ferring and Filipp 1996;Von Collani and Herzberg 2003). An example item is ''All in all, I am satisfied with myself''. The widely used scale contains 10 items with a four-point response format and the anchors not at all true and completely true (M = 3.17, SD = 0.54, a = .90).
Self-Regulatory Strategies
We used an instrument developed by Brandtstädter and Renner (1990) to assess self-regulatory strategies in the German language. We administered the scales tenacious goal pursuit (TEN) and flexible goal adjustment (FLEX), each containing 15 items. Tenacious goal pursuit reflects an assimilative tendency to ''adjust developmental situations to personal preferences'' (Brandtstädter and Renner 1990, p. 64). An example item for tenacious goal pursuit is ''The harder a goal is to achieve, the more desirable it often appears to me'' (M = 3.42, SD = 0.58, a = .85). Flexible goal adjustment is an accommodative tendency and measures the extent to which one adjusts personal preferences to situational constraints. An example item for flexible goal adjustment is ''I can adapt quite easily to changes in a situation'' (M = 3.37, SD = 0.57, a = .85). Respondents were asked to rate their agreement with the items on a five-point rating scale with the end points not at all true and completely true.
Well-Being
The widely used Satisfaction with Life Scale (Pavot and Diener 1993) was administered. The scale contains five items with a seven-point rating scale with the anchors totally disagree and totally agree (M = 4.87, SD = 1.37, a = .86). An example item is ''I am satisfied with my life''. All measures were presented with a response format providing only the scale anchors due to formatting issues. Descriptives, internal consistencies, and interrelations of self-regulatory strategies, self-esteem and well-being are also presented in Table 2. Interrelations were highly significant, which is to be expected regarding the already mentioned relations between the constructs.
Humor Styles
We administered a German version of the Humor Styles Questionnaire, translated by the second author. The translation was checked by a native English speaker. The Humor Styles Questionnaire contains 32 items, each of the four scales consists of 8 items (see Martin et al. 2003, for the original scales; see Müller 2009, for the German version). Respondents rated their agreement with the items on a seven-point rating scale (totally disagree-totally agree).
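As a rough illustration of how the scale scores and internal consistencies reported for these instruments (means, SDs, and Cronbach's alpha) could be computed, the following Python sketch scores a hypothetical item-level matrix. The column names, the random placeholder responses, and the 8-item scale layout are assumptions for illustration only, not the study's actual data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical item-level data: 342 respondents answering, e.g., 8 affiliative HSQ items
rng = np.random.default_rng(0)
affiliative = pd.DataFrame(rng.integers(1, 8, size=(342, 8)),
                           columns=[f"hsq_affil_{i}" for i in range(1, 9)])

scale_score = affiliative.mean(axis=1)  # one scale score per respondent
print(f"M = {scale_score.mean():.2f}, SD = {scale_score.std(ddof=1):.2f}, "
      f"alpha = {cronbach_alpha(affiliative):.2f}")
```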
Strategy of Data Analysis
Data screening revealed a nonnormal distribution of the age variable since only 10 % of the sample was aged over 40 years (n = 37). With respect to interpretability of results, a transformation of this variable was considered to be contraindicated. The exploratory and confirmatory factor analyses were conducted for the whole sample. However, to exclude age as a possible confound and to acknowledge potential developmental changes in humor styles and associations with well-being over the life course, we used a subsample of participants aged up to 40 years for cluster and variance analyses. In this subsample, age was normally distributed. This subsample consisted of N = 305 participants aged 15-40 years (M = 25.17, SD = 4.60). Supporting the assumption of age differences over the life course and the strategy of investigating a less age-heterogeneous sample, differences between the retained and the screened sample emerged in the way that affiliative humor scores were higher, whereas scores of aggressive humor were lower, in the screened sample (both p values < .001). Other differences did not reach significance. Age-related differences in humor styles will be discussed below. Firstly, to examine the underlying structure of the Humor Styles Questionnaire, exploratory and confirmatory factor analyses with the whole sample were performed. Secondly, differential constellations of humor styles, so-called humor types, were investigated. Therefore, in order to find different humor types, hierarchical clustering was performed.
Lastly, to answer the research question on whether humor types differ in self-regulatory strategies, self-esteem, or well-being, analyses of variance (ANOVAs) and post hoc tests were performed. Analyses were carried out with Amos 17.0 and PASW (former SPSS) 18.0.
Factor Structure of the German HSQ
To ensure validity of the HSQ, we aimed at comparing the underlying factor structure of the HSQ in our German sample with Martin et al.'s (2003) findings. Assumptions for data analysis with exploratory and confirmatory factor analyses were met. The 32 items of the German version of the HSQ were factor analyzed with exploratory factor analysis (principal axis factoring and varimax rotation). Firstly, after running an exploratory factor analysis, a factor solution emerged that was highly consistent with Martin et al.'s (2003) findings. As in the Canadian sample, according to the scree plot a four-factor solution that explained 45.8 % of the total variance seemed optimal. The first four initial eigenvalues were 6.18, 3.84, 2.75, and 1.89 (the next three eigenvalues were 1.3, 1.2, and 1.0). Items loaded on the same factors as presented in Martin et al.'s (2003) analyses. Some, mainly minor, differences in factor loadings were found: For items constituting the self-enhancing scale, item 30 (''I don't need to be with other people to feel amused - I can usually find things to laugh even when I'm by myself'') had a significant factor loading in the original but not in the German sample (a_G = .16 vs. a_or = .58). For the scale aggressive, items 11 and 19 (''When telling jokes or saying funny things, I am usually not very concerned about how other people are taking it'' and ''Sometimes I think of something that is so funny that I can't stop myself from saying it, even if it is not appropriate for the situation'') showed slightly lower factor loadings in the German sample (a_G = .33 vs. a_or = .53, and a_G = .26 vs. a_or = .48, respectively). In fact, item 19 had a slightly higher loading on the factor constituted by items of affiliative humor (a_G = .34). However, items of the scales self-defeating and affiliative were all similar in value and significance. Additionally, a confirmatory factor analysis was carried out that tested the model of Martin et al. (2003), evaluated against the fit criteria recommended by Hu and Bentler (1999).
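A minimal sketch of this kind of exploratory factor analysis is given below. Note the assumptions: the data matrix is a random placeholder standing in for the 32 HSQ items, and scikit-learn's FactorAnalysis (an EM/maximum-likelihood style estimator with varimax rotation) is used instead of the principal axis factoring applied in the original analysis, so results would differ in detail.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# X: respondents x 32 HSQ items (random placeholder data, 7-point responses)
rng = np.random.default_rng(1)
X = rng.integers(1, 8, size=(342, 32)).astype(float)

Z = StandardScaler().fit_transform(X)

# Eigenvalues of the item correlation matrix, for a scree-plot style decision on factor count
eigenvalues = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
print("First four eigenvalues:", np.round(eigenvalues[:4], 2))

# Four-factor solution with varimax rotation
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(Z)
loadings = fa.components_.T  # items x factors loading matrix
print("Loadings shape:", loadings.shape)
```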
Descriptive Analyses
The humor scales affiliative, self-enhancing, aggressive, and self-defeating were constructed according to the original HSQ scales presented in Martin et al. (2003). Internal consistencies of the German HSQ scales were moderate to good and comparable to findings of Martin et al. (2003). Participants scored highest on affiliative humor (M = 5.87, SD = 0.78; a = .78), followed by self-enhancing humor (M = 4.60, SD = 1.04; a = .83), and aggressive humor (M = 4.04, SD = 0.94; a = .74). The lowest scores were found on self-defeating humor (M = 3.39, SD = 1.10; a = .84). Most correlations between the German HSQ scales were quite similar to the original HSQ scales in absolute value and significance level (see Table 2). However, two notable differences in the correlation pattern emerged: While the original affiliative and self-defeating scales were not correlated, we found a small association of the scales in the German sample (r = .13, p < .05). Similarly, self-enhancing and self-defeating humor did not correlate in the original study, whereas in the German sample, we found a quite remarkable association (r = .18, p < .01). Descriptive statistics are reported in Table 3.
Cluster Analysis
In a next step, it was examined with cluster analysis whether different patterns of humor styles could be differentiated and thus regrouped into humor types. We z-standardized the humor style scales to facilitate interpretation of findings. These z-scores were entered as grouping variables in a hierarchical cluster analysis with squared Euclidean distance and Ward's algorithm. The three-cluster solution proved to be most stable when running analyses with the whole and the screened sample and to be most compelling in both parsimony and interpretability of the clusters. The first cluster was characterized by an above average amount of all four humor styles (humor endorsers, N = 134, 43.9 %), the second cluster had below average scores in all humor styles, especially very low self-enhancing humor (humor deniers, N = 109, 35.7 %), and the third cluster was characterized by slightly above average affiliative humor, highly above average self-enhancing humor, and below average aggressive and self-defeating humor (self-enhancers, N = 62, 20.3 %). Z-standardized values for each of the three clusters are presented in Table 4 and Figure 1. We validated the three-cluster solution with k-means clustering. In a second cross-validation procedure, we divided the sample at random into two subsamples with approximately 50 % of the cases each and conducted additional hierarchical cluster analyses in the subsamples. 2 In both subsamples (n1 = 147 and n2 = 158), a highly similar three-cluster solution emerged, which also mirrored the cluster results in the total sample. In this solution, the first cluster showed above average levels in all humor styles (n1 = 52 and n2 = 82), a second cluster showed below average levels in all humor styles (n1 = 57 and n2 = 52), and a third cluster showed slightly above average affiliative humor, highly above average self-enhancing humor, and below average self-defeating and aggressive humor (n1 = 38 and n2 = 24). One can fairly conclude from these results that the initial cluster solution proved useful in the cross-validation procedure, and we used this solution for further analyses. We checked for age and gender differences in the three clusters. The clusters did not differ in age of the individuals, F(2, 302) = 2.05, p = .13; nor in gender distribution between clusters, Cramér's V = .13, p = .09.
Table 3. Descriptive statistics, internal consistencies, and interrelations of the German HSQ humor scales and zero-order correlations with well-being measures.
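The following sketch illustrates the clustering procedure described above (z-standardized HSQ scale scores, Ward's algorithm, a three-cluster cut, and a k-means cross-check). The input scores are simulated placeholders; SciPy's 'ward' linkage operates on Euclidean distances and minimizes within-cluster variance, which corresponds to the squared-Euclidean Ward criterion used here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore
from sklearn.cluster import KMeans

# scores: N x 4 array of HSQ scale means (affiliative, self-enhancing, aggressive, self-defeating)
rng = np.random.default_rng(2)
scores = rng.normal(loc=[5.9, 4.6, 4.0, 3.4], scale=[0.8, 1.0, 0.9, 1.1], size=(305, 4))

z = zscore(scores, axis=0, ddof=1)          # z-standardize each scale

tree = linkage(z, method="ward")            # Ward's hierarchical clustering
labels = fcluster(tree, t=3, criterion="maxclust")  # cut the tree into three clusters

# Cluster profiles: mean z-scores per cluster, mirroring Table 4 / Figure 1
for c in np.unique(labels):
    print(f"cluster {c} (n = {np.sum(labels == c)}):",
          np.round(z[labels == c].mean(axis=0), 2))

# Cross-validation with k-means, as done in the article
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(z)
print("k-means cluster sizes:", np.bincount(km.labels_))
```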
Correlates of Humor Types
In a next step, associations between humor styles, self-regulatory strategies, self-esteem, and well-being were investigated. Zero-order correlations of the humor scales, self-regulatory strategies, self-esteem, and well-being are presented in Table 3. Whereas affiliative and self-enhancing humor were highly positively correlated with self-regulatory strategies, life satisfaction, and well-being, aggressive humor showed null correlations with all measures except a small negative correlation with flexible goal adjustment. Also, self-defeating humor was not related to flexible goal adjustment and life satisfaction. Finally, to examine associations of the humor clusters, the so-called humor types, with the well-being variables, self-esteem, tenacious goal pursuit, flexible goal adjustment, and life satisfaction were tested for mean differences between the three clusters (see Ferring et al. 2009, for an example using a comparable methodological approach). We performed univariate analyses of variance (ANOVAs) for each dependent variable (DV), with an apportioned alpha of .01. Post hoc tests were carried out; presented here are results of the Bonferroni test, which corrects the error probability for the number of compared groups. Since the DVs assess related constructs with significantly overlapping variance (see correlations in Table 2), for the post hoc tests alpha was also set to .01. For self-esteem, tenacious goal pursuit, and flexible goal adjustment, homogeneity of error variances was given. Since error variances for life satisfaction were unequal (Levene's test, p < .05), a robust Welch test was performed. This test does not require variance homogeneity. Since there were no differences in error probabilities between the ANOVA and the Welch test, only the results of the ANOVAs are presented here. To test for cluster differences in life satisfaction, Tamhane's T2 post hoc test was carried out, which also does not require variance homogeneity. The results concerning self-esteem, self-regulatory strategies, and life satisfaction will be presented separately.
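A minimal sketch of the ANOVA-plus-Bonferroni procedure described above is shown below, assuming hypothetical score vectors for the three humor types; it omits the Welch and Tamhane's T2 variants used for the life-satisfaction variable.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def anova_with_bonferroni(groups, alpha=0.01):
    """One-way ANOVA followed by Bonferroni-corrected pairwise t-tests."""
    f, p = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
    pairs = list(combinations(range(len(groups)), 2))
    for i, j in pairs:
        t, p_raw = stats.ttest_ind(groups[i], groups[j])
        p_adj = min(p_raw * len(pairs), 1.0)  # Bonferroni correction
        flag = "*" if p_adj < alpha else ""
        print(f"  group {i} vs {j}: p_adj = {p_adj:.4f} {flag}")

# Hypothetical self-esteem z-scores for the three humor types (sizes as in the cluster solution)
rng = np.random.default_rng(3)
endorsers      = rng.normal(0.00, 1.0, 134)
deniers        = rng.normal(-0.35, 1.0, 109)
self_enhancers = rng.normal(0.45, 1.0, 62)
anova_with_bonferroni([endorsers, deniers, self_enhancers])
```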
We found significant group differences in self-esteem, F(2, 302) = 15.75, p < .001. Self-enhancers and endorsers as well as self-enhancers and humor deniers differed significantly (both p values < .01), in the sense that endorsers had an average score on self-esteem, humor deniers a below average score, and self-enhancers an above average score. The difference between humor endorsers and humor deniers did not quite reach significance (p = .01). Humor types also showed differences on tenacious goal pursuit, F(2, 302) = 8.42, p < .001. Only humor deniers and self-enhancers differed significantly (p < .01), whereas the other comparisons did not reach significance (p = .04). As with self-esteem, endorsers had average scores, humor deniers below average, and self-enhancers above average on tenacious goal pursuit. Considering flexible goal adjustment, humor types differed significantly, F(2, 302) = 18.26, p < .001. While humor endorsers and self-enhancers did not show differences in flexible goal adjustment (p = .63), the other comparisons were significant (p < .001): Humor deniers scored below average, humor endorsers on average, and self-enhancers had the highest values on flexible goal adjustment. Lastly, humor types differed significantly in the amount of life satisfaction, F(2, 302) = 11.53, p < .001. In particular, humor deniers differed significantly from both humor endorsers and self-enhancers (both p values < .01), whereas humor endorsers and self-enhancers did not differ in amount of life satisfaction (p = .20). The z-standardized values of the DVs self-esteem, tenacious goal pursuit, flexible goal adjustment, and life satisfaction for each cluster are presented in Table 5 and plotted in Fig. 2.
In an attempt to quantify the effect sizes of variance in the well-being measures explained by the humor types compared with humor styles, we calculated R from several ANOVAs with humor type (presented above) and the dichotomized humor scales, respectively, as factors. As Table 6 shows, aside from self-enhancing humor (low versus high) explaining a larger amount of variance in flexible goal adjustment than humor type (R = 0.089 compared to R = 0.054), all other comparisons show a larger effect size for the associations of humor type with the well-being measures compared to humor styles, thus showing an advantage of investigating humor types instead of single humor scales.
Table 6 note: Effect sizes (R) were derived by dividing the sum of squares between groups by the total sum of squares. TEN = tenacious goal pursuit, FLEX = flexible goal adjustment.
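The effect size R reported in Table 6 is the between-group sum of squares divided by the total sum of squares, i.e., the proportion of variance explained by the grouping factor. A small sketch of this computation, again with hypothetical group vectors, is given below.

```python
import numpy as np

def effect_size_r(groups):
    """SS_between / SS_total: share of total variance explained by group membership."""
    values = np.concatenate(groups)
    grand_mean = values.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((values - grand_mean) ** 2).sum()
    return ss_between / ss_total

# Hypothetical z-standardized well-being scores for the three humor types
rng = np.random.default_rng(4)
groups = [rng.normal(0.00, 1.0, 134),   # endorsers
          rng.normal(-0.35, 1.0, 109),  # humor deniers
          rng.normal(0.45, 1.0, 62)]    # self-enhancers
print(f"R = {effect_size_r(groups):.3f}")
```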
Factor Structure of the HSQ and Equivalence of the German and the Original HSQ
Firstly, to ensure validity of the findings gained with the HSQ in a German sample, findings of Martin et al. (2003) concerning the factor structure of the Humor Styles Questionnaire were compared and replicated in an age-heterogeneous sample in adolescence and young and middle adulthood. The adequacy of the German and English versions of the HSQ was confirmed with exploratory and confirmatory factor analyses, showing that fit indices were only slightly different from Martin et al.'s (2003) model. One difference emerged in the fact that self-defeating humor was to a small extent correlated with both affiliative and self-enhancing humor in the German sample, but did not correlate in the original sample. Since the exploratory factor analytic results are highly comparable to earlier studies on the HSQ and the correlations are rather small, it cannot totally be excluded that the different correlation patterns represent a sample-specific finding. They might even represent method-specific common variance induced by similar wording, although the translation process had been carried out carefully to prevent similar wording. Having ruled out alternative explanations, it might also be the case that the differences in the correlations of the humor styles reflect cultural differences between the original Canadian and the German sample. It has been shown that humor styles differ between cultures; however, these investigations drew comparisons between Western and Eastern and Western and Arabic societies, respectively (e.g., Chen and Martin 2007; Kalliny et al. 2006). Therefore, with both studies relying on a sample of individuals of a Western society, one might suggest only small differences between the Canadian and the German sample. A possible explanation is that self-defeating humor is a common way of interaction for younger Germans; it might be a culture-specific expectation in Germany to be able to laugh or make jokes about oneself. Self-defeating humor might thus not necessarily be maladaptive in Germans as suggested by Martin et al. (2003). However, to our knowledge, specific differences in humor understanding between Canadians and Germans have not been investigated yet and would require a qualitative study investigating the meaning and reception of self-defeating humor in both cultures.
Humor Types and Well-Being
To regroup humor styles into broader constellations of humor styles, namely humor types, a cluster analytic procedure was carried out. We found one type endorsing humor, one type refusing to use humor, and one type using humor to enhance the self. Humor styles were only to a small extent related to well-being. This points to the fact that humor scales are not per se detrimental or beneficial, but have to be investigated within the context of other humor styles and, perhaps, their situational dependency. Humor types were related with well-being measures in a more coherent way and seemed easier to interpret than the associations of humor styles, which might once again justify the methodological approach used in this study. Nonetheless, the results have to be regarded with caution, since the sample was composed of more than two thirds women and one can thus only carefully draw broad conclusions. The humor type-well-being relationships shall be described next before drawing further inferences. The first humor type, ''humor endorsers'', showed high scores (i.e., above average) across all humor scales and characterized the largest part of the sample (43.9 %). This might reflect cheerfulness and generalized behavior patterns to make jokes, see ''the funny side of life'' or not take life too seriously. On the other hand, it might characterize a behavior pattern that uses humor carelessly or without further reflection, using jokes and funny remarks in ways that might even be harmful for oneself or another person. This possible explanation was validated by the analyses of variance: Humor endorsers showed average levels of self-regulatory strategies, self-esteem, and well-being. Even though the cross-sectional associations must not be interpreted in a causal way, one might infer that the humor endorser pattern is not especially beneficial for well-being. This finding particularly advances the understanding of earlier mono-dimensional results of relations between humor styles and well-being (e.g., Martin et al. 2003): Despite high levels of self-enhancement, the well-being pattern for endorsers is not particularly adaptive.
The second humor type, ''humor deniers'', showed below average levels of all humor styles, but especially low self-enhancing humor. This reflects a behavior pattern in which humor is seldom used to cheer oneself up. The humor deniers showed the lowest levels of self-regulatory strategies, self-esteem, and well-being, implying that this humor type is not beneficial for well-being either.
Thirdly, the humor type ''self-enhancers'' was characterized by below average aggressive and self-defeating humor, average affiliative humor, and clearly above average self-enhancing humor. This reflects a humor type that focuses on humor to make oneself feel better even when not in the company of others. Analyses of variance showed impressively that this type might be most adaptive: On all measures of self-regulatory strategies, self-esteem, and well-being, self-enhancers scored highest. This finding shows that a self-enhancing humor style might be most beneficial for well-being and underlines the adaptive value of self-defeating and aggressive humor being absent.
At this point, the findings shall be compared to the original intent to construct beneficial and detrimental humor styles in the HSQ (Martin et al. 2003). It seems noteworthy that the functional distinction between beneficial and detrimental humor is not reflected in the cluster analytic results and we address this issue with three arguments. Firstly, by replicating the factor structure of the HSQ in the German sample, we can fairly rule out the assumption that failures in translation of the questionnaire might have caused these findings. Secondly, these results confirm our initial assumption that humor styles cannot be considered as beneficial or detrimental per se as these might be expressions of different underlying motivational strivings. Thirdly, in conceptualizing humor styles, it has been widely neglected that humor styles are to a certain extent context dependent. It might be opportune to use self-defeating humor in the working context, aggressive humor with friends, and affiliative humor in the family. These assumptions might explain why some individuals, the so-called humor endorsers, apparently use humor in an undifferentiated way-they might just adapt their humor styles according to the context. This conclusion however can only be drawn cautiously, since subjective justifications for using humor or context-dependent variations or stability of the humor styles, respectively, have not been assessed. Further studies should evaluate on the context dependency of humor styles to shed more light to the-apparently complex-associations of humor styles and well-being. In addition, studies not limited by a cross-sectional sample might also investigate the longitudinal associations between humor types and well-being; it might be the case that self-regulatory strategies act as mediators in the humor-well-being relationship. Another open issue for further consideration is the inclusion of indicators of negative well-being, like depression or anxiety, and investigate their associations with humor types.
In a further attempt to clarify the complex associations between humor styles and well-being, it is worthwhile to step back from the functional implications of humor styles, namely, their pre-defined associations with well-being. Instead, we looked into the item contents to elaborate on the structural components of the humor styles. 3 First, the items constituting ''affiliative humor'' deal with the outcome of humorous behavior; humor is presented as a means to ''make other people laugh''. Also, five of the eight constituting items describe behaviors of affiliative humor being absent (reverse-coded) and could, theoretically, be affirmed by persons who are equipped with no humor style whatsoever. Compared with other concepts of humor, the concept of affiliative humor most closely resembles humor as a strength of character (Peterson et al. 2005) or as a moral value, belonging to the core virtue 'transcendence' (humor as liking to laugh and joke and bringing smiles to other people; Ruch et al. 2010b). Second, items constituting the scale ''self-enhancing humor'' deal, without exception, with the antecedents of humor, namely a feeling or state of being depressed, sad, alone, or upset. Humor, in this line of reasoning, is described as a means to cope. In sum, both humor styles resemble constructs that have been shown to be positively related with well-being. Third, items constituting the scale ''aggressive humor'' describe antisocial behaviors like teasing, offending, or criticizing someone with humor to enhance oneself. Again, four of the eight items are negatively formulated and could be affirmed by persons who do not see themselves as humorous persons. Lastly, the scale ''self-defeating humor'' is constituted by a total of eight items, of which five items describe the means of ''putting'' oneself ''down''. Also, the outcomes of humor are described, namely, to make people, friends, and family laugh, like or accept the individual, and to keep one's ''friends and family in good spirits''. Thus, one can fairly say that self-defeating humor shows affiliative, morally valued components. In sum, on a structural level, the humor scales might contain multiple meanings. Consequently, instead of a functional distinction between beneficial and detrimental humor styles, a typological approach investigating humor types proved useful in our study. Thus, the study shows that examinations of bivariate associations between humor styles and well-being do not show the ''full picture'' of the manifold associations between these constructs. This was also shown by comparing effect sizes of the associations of humor types with well-being measures against the associations of humor scales. Aside from one advantage of humor scales (low versus high scores on self-enhancing humor contributing more to flexible goal adjustment than humor type), all other comparisons could be interpreted in favor of using humor types instead of humor scales. However, the importance of this quantification should not be overestimated, since the advantage of using humor types clearly lies in interpreting the contribution of humor styles in the context of the other styles. For instance, self-enhancing humor is present in both endorsers and self-enhancers, but the adaptive value of this humor style is only reflected in the absence of the maladaptive styles.
In addition, it would now be interesting to examine the practical implications of these humor types. With an experimental test, for example by presenting different scenarios in vignettes, one could clarify if humor types differ in their social and self-related responses to these scenarios. 4 Another open research question would of course be the longitudinal associations between humor types and well-being, and the attempt to examine if self-enhancing humor is really ''beneficial'' in the causal sense, or if one or several third variables, for example extraversion, emotional stability, or optimism, cause the associations between self-enhancing humor and well-being.
Age Differences
We did not assume age differences due to the sample in adolescence and young adulthood, although age differences might occur at a later stage in life, when humor styles or humor types might be dependent upon the time horizon of the individual. In line with this reasoning, we did not find any age differences between the clusters. However, Martin et al.'s (2003) findings suggest age differences in humor styles. This may be due to the fact that age differences can only be found on the scale level, but not on the cluster level. In fact, on the scale level, a further analysis regarding the whole sample revealed that age was negatively related to affiliative humor (r = -.30, p \ .001) and aggressive humor (r = -.26, p \ .001), but not to the other two humor scales. Also, the participants older than 40 years who had been screened from the sample due to unequal age distribution showed lower aggressive and higher affiliative scores compared to the younger participants. Thus, in general, older participants used humor less to enhance relationships with others or to devaluate others with humor, which is consistent with Martin et al.'s (2003) findings. Further studies with a larger older sample should be used to replicate this finding.
Further Considerations
Parallel to our study, Galloway has been investigating combinations of humor styles and their relations to the Big Five personality traits. Applying a clustering method similar to our study, Galloway (2010) found four clusters with (1) above average scores on all humor styles, (2) below average scores on all humor styles, (3) above average scores on the positive styles and below average on the negative styles, and (4) above average scores on the negative styles and below average on the positive styles. In our study, two clusters were comparable to those that had been found in the Australian sample by Galloway (2010), namely, the humor endorsers of cluster No. 1 and the humor deniers of cluster No. 2. Also, we found a cluster that showed high self-enhancing humor, average affiliative humor, and below-average negative humor styles. This pattern replicates the cluster solution of Galloway to a large extent. Considering the fact that the samples of the studies were both to a large part university students with a similar mean age, why did our study not totally replicate the findings of Galloway (2010)? Leaving aside methodological caveats concerning the translated questionnaire in our study, we cannot totally rule out cross-cultural differences in interpreting the HSQ items. Concerning methodological arguments, however, in our data a four-cluster solution was not stable and could not be cross-validated. Therefore, we decided in favor of a three-cluster solution. This solution could be cross-validated both with another clustering method and in two randomly split subsamples.
In addition, Galloway (2010) examined associations of the four clusters with self-esteem. Given the different number of clusters, the results are not totally comparable with our data. However, both studies showed that clusters low on affiliative and self-enhancing humor styles show below-average self-esteem, whereas clusters high on these styles show above-average self-esteem. From the analyses provided both in this contribution and in Galloway's (2010), it can be concluded that especially the combination of self-enhancing and affiliative humor styles is related with self-esteem.
Conclusions
In a German sample of young and middle-aged adults, we identified different humor types based on the Humor Styles Questionnaire (Martin et al. 2003) via cluster analysis, extending current bivariate approaches to the study of humor and well-being. Humor types were differentially associated with self-regulatory strategies, self-esteem, and well-being. Self-enhancers, characterized by high self-enhancing humor, average affiliative humor, low aggressive humor, and low self-defeating humor, showed the most favorable associations with quality of life and well-being measures. In sum, these findings gained with a typological approach provide further evidence for self-enhancing humor as an important resource for well-being, and especially underline the benefits when self-defeating and aggressive humor are absent. | 2022-12-16T14:16:03.377Z | 2012-05-12T00:00:00.000 | {
"year": 2012,
"sha1": "d06f7fc1e7af830cd9a0d56f44e35dc98107ebce",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10902-012-9342-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "d06f7fc1e7af830cd9a0d56f44e35dc98107ebce",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
235289408 | pes2o/s2orc | v3-fos-license | Full Loop Numerical Simulation of Fluidization Characteristics at Cyclone and Furnace of 110 MW CFB
Circulating fluidized bed (CFB) boilers have gained recognition over pulverized coal boilers for their ability to operate with low-quality coal. Boiler load can be increased by raising the coal input and adjusting the combustion air ratio, which is why the ratio between secondary and primary air is one of the key parameters influencing fluidization in a CFB boiler. A CFB boiler consists mainly of a furnace, two cyclones, and forced loop pipes, with the cyclones being the boiler parts in which fluidization velocities can reach up to 30 m/s. A full-loop 3D simulation of the CFB, focusing on the furnace and cyclone, was carried out with a computational fluid dynamics (CFD) program using the Eulerian multiphase model to investigate the sand volume fraction, the air and sand velocities, and the pressure distribution around the cyclone. Several combustion air ratios between primary and secondary air, such as 55%-45% and 50%-50%, were simulated with 63% and 100% load variations. It was shown that operation with combustion air ratios of 50%-50% and 55%-45% leads to good fluidization at 63% load, while at 100% and 110% load an abundance of sand enters the cyclone inlet and induces higher sand and air velocities.
Introduction
Circulating fluidized bed (CFB) boilers are one implementation of fluidized bed boiler technology. They have gained recognition, especially among industrial power-generation users, for several useful characteristics, such as the ability to burn low-rank coal and a smaller environmental impact owing to low NOx emissions. Bench-scale testing and commercial-scale prototyping to obtain a CFB combustor design with high efficiency and low emissions are very costly [1]. Such experiments are still performed today, but interest in alternative approaches based on numerical simulation is growing along with the rapid development of computing technology, in particular computational fluid dynamics (CFD) [2].
Fluidization occurs when solid particles are brought into a state of suspension by a liquid or gas as the fluid velocity increases. If air or gas is passed through a bed of solid particles at low velocity, however, the particles remain undisturbed and fluidization does not take place [3]. One fluidization study employed the Eulerian multiphase model together with the standard k-ε turbulence model and showed that the k-ε model was accurate and precise, with the maximum air velocity occurring in regions far from the wall, where the air velocity is negative [4]. In large-scale CFB simulations the Eulerian approach is commonly implemented. In the two-fluid model, often called the Eulerian model, the gas and granular phases are treated as continually and fully interpenetrating. In this case, a generalization of the Navier-Stokes equations for interacting media is the set of equations that is implemented [5].
One study examined the air field and the fluidization of sand, which can reach velocities of up to 30 m/s, and the resulting damage around the cyclones. It was found that abrasion of the cyclone walls by both air and sand can potentially occur [6]. Until now, however, the effect of operating parameters on the mechanism of gas-solid particle mixing is not fully understood. Problems due to insufficient fluidization still arise, such as agglomeration, which then leads to de-fluidization, as well as abrasion in the cyclones. With well-understood operating parameters, such as the primary and secondary air distribution, good fluidization is expected to be achievable. A proper study of the primary and secondary air distribution and its effect on the cyclones is therefore needed. Several numerical studies have addressed these questions: one study of a 30 MW CFB boiler investigated the effects of five different primary and secondary air ratios at full load only [7][8]. Another study used a 3D model of a 406 MWe boiler but without load or air ratio variations [9]. A 3D simulation was also implemented for a supercritical boiler, but it covered only the furnace area [10]. One study concluded that a higher erosion rate was caused by higher fluidizing air velocity [8]. Simulations with various primary and secondary air ratios and three different loads were also conducted [9], but they focused only on the furnace and did not examine the cyclone in depth.
In this simulation, a commercial CFD code was used to study the effect of various primary and secondary air ratios, at several load variations, on the fluidization characteristics in the cyclone areas. The boiler was modelled as isothermal, without a combustion process.
Method
This simulation was carried out using commercial software and proceeded through pre-processing, processing, and post-processing stages. In the pre-processing stage, several steps were performed: modelling of the boiler test object, meshing of the domain, and definition of the boundary conditions and parameters.
In the processing stage, the mesh and domain were exported to the solver. Several settings were specified, including the models, materials, boundary conditions, operating conditions, control and monitoring conditions, and initialization. Post-processing is the graphical presentation and analysis of the results, which were obtained as both qualitative and quantitative data: quantitative data in the form of pressure and velocity distributions, and qualitative data in the form of flow visualization using path lines, contour plots, and velocity profiles of the gas-particle flow for the various flow velocities. These results were then analyzed and compared.
Governing equation
Gas-solid flow in a fluidized bed is governed by the conservation equations of continuity and momentum, so in this study the Eulerian multiphase model was employed. Turbulence was modelled with the standard k-ε model, whose transported quantities are the turbulence kinetic energy (k) and its dissipation rate (ε). The transport equations of this model are [10]:

$$\frac{\partial}{\partial t}(\rho k) + \frac{\partial}{\partial x_i}(\rho k u_i) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + G_k + G_b - \rho\varepsilon - Y_M + S_k$$

$$\frac{\partial}{\partial t}(\rho \varepsilon) + \frac{\partial}{\partial x_i}(\rho \varepsilon u_i) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k}\left(G_k + C_{3\varepsilon} G_b\right) - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k} + S_\varepsilon$$

Here $G_k$ is the generation of turbulence kinetic energy due to the mean velocity gradients, $G_b$ is the generation of turbulence kinetic energy due to buoyancy, and $Y_M$ is the contribution of the fluctuating dilatation in compressible turbulence to the overall dissipation rate. $C_{1\varepsilon}$, $C_{2\varepsilon}$, and $C_{3\varepsilon}$ are model constants, while the user-defined source terms $S_k$ and $S_\varepsilon$ were neglected in this study. The governing equations above were implemented in a common CFD program.
Geometry and mesh
The simulated unit is a Wuxi Huaguang CFB boiler located in Nagan, Aceh, Indonesia. This natural-circulation CFB boiler has a load capacity of 110 MW with a steam generation of 382 ton/h. Its main parts are a furnace, two cyclone separators, and forced loop pipes. The furnace geometry is 3.2 m x 14.4 m x 36.3 m. In this study a full-loop boiler with cyclones and loop seals is simulated, without a bypass section. Figure 1 shows the simulation domain. It includes four rectangular coal inlets located at the front of the furnace, defined as mass flow inlets. There are 9 secondary air inlet pipes on the front side of the furnace and another 12 secondary air inlet pipes on the rear side; velocity inlet boundary conditions were used for the secondary air. For simplicity, the primary air is assumed to enter entirely through the bottom of the furnace. This assumption was also used for the HPFF (High Pressure Fluidizing air) sections [2], which use mass flow inlet boundary conditions. A pressure outlet boundary condition was applied only at the cyclone outlet. All of these settings are summarized in Table 3.
The boiler was meshed mainly with hexahedral cells after the geometry had been divided into 64 volumes; several parts, such as the lower cyclones and the loop pipes, were meshed with tetrahedral cells. The mesh consists of 445,812 nodes and 497,311 elements, with all cell sizes below 0.2 m, generated with the relevance centre and smoothing options set to medium.
Simulation setting
The Sutherland formulation was used for the air viscosity, and air was treated as an ideal gas [10]; these settings are summarized in Table 1. Note that the boiler was assumed to operate at a constant operational temperature. At the beginning of the simulation, sand was patched in with a volume fraction of 0.4 and a bed height of 2.3 m. Table 3 summarizes the settings for the sand phase properties, with reference to Zhang et al. [2]; several data, such as the density, diameter, and viscosity of the sand, were taken from Sudarmanta et al. [11]. The Eulerian model was used as the multiphase model to define the gas and solid phases and their interactions. Because of its general applicability, robustness, and efficiency, the standard k-ε turbulence model was used in this simulation [8]. Only two phases were defined, and the coal inlet was assumed to be an ideal gas. As mentioned earlier, the model was assumed to be isothermal.
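As a back-of-the-envelope check on these initial conditions, the sketch below estimates the sand inventory implied by patching a volume fraction of 0.4 over a 2.3 m bed on the 3.2 m x 14.4 m furnace cross-section. The particle density is not given in this excerpt, so the value used is only an assumed placeholder.

```python
# Rough estimate of the initial sand inventory implied by the patched bed.
alpha_s = 0.4             # patched sand volume fraction (from the simulation setting)
bed_height = 2.3          # m, patched bed height
width, depth = 3.2, 14.4  # m, furnace cross-section (from the geometry description)
rho_s = 2600.0            # kg/m^3, ASSUMED particle density (placeholder, not from the paper)

cross_section = width * depth                              # m^2
sand_mass = alpha_s * rho_s * cross_section * bed_height   # kg
print(f"Cross-section = {cross_section:.1f} m^2, "
      f"initial sand inventory ~ {sand_mass / 1000:.0f} t")
```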
CFD Solver
For the solver, pressure-velocity coupling used the Phase Coupled Semi-Implicit Method for Pressure-Linked Equations (PC-SIMPLE) [12]. The momentum, volume fraction, turbulence kinetic energy, and turbulence dissipation rate equations were discretized with the first-order upwind scheme.
Superficial velocity
The term superficial velocity refers to the air velocity alone. A higher superficial velocity intensifies the fluidization of the sand. Ari et al concluded that a larger primary air flow produced superficial velocities above 10 m/s around the central x-axis of the furnace at t = 50 s for every load variation, over a larger area [9]. That study showed contours of superficial velocity without reporting specific values in the furnace, and concluded that superficial velocity increases when the load or the primary air is increased. In the present study, the air velocity distribution was analyzed from a plot of the z-direction (upward) velocity against the distance from the furnace center at a height of 1 meter above the bottom of the furnace. Data were collected at this height because a dense bed is present there, so the distribution and magnitude of the superficial air velocity (the air velocity in contact with the sand particles) can be examined. Figure 2a shows the z-direction velocity plot, where the velocity at each load does not exceed 3 m/s and is generally around 2 m/s, which corresponds to air velocities entering the turbulent-transition regime (1.76 m/s). The plot in Figure 2b shows the same tendency, with the z-direction velocity again not exceeding 3 m/s. It can therefore be concluded that increasing the load, which here means increasing the combustion-air capacity, did not directly affect the superficial air velocity in the bottom furnace. This differs slightly from Wijayanto et al, who concluded that additional primary air increased the superficial velocity [11]. The difference may be due to the air ratios not differing greatly at the same load, or to the simplification of the primary-air nozzles, whereby the air was assumed to flow entirely from the bottom of the furnace without nozzles. Figures 2a and 2b also show a strongly negative velocity at about ±2 meters from the furnace center. This is plausible because the sampling was done at an elevation of one meter, where sand material, primary air, secondary air, and air returning from the cyclone mix; previous work has shown that vertical velocities of up to 8 m/s can occur there [1].
Furnace pressure
Pressure is the only parameter that could be extracted from the boiler operational data; other parameters, such as the superficial air velocity and the sand fraction, are not practical to measure during boiler operation. Figure 3 shows the static-pressure drop along the furnace height for each variation of load and combustion-air ratio, taken at the furnace center x-axis at t = 50 s. The pressure input at t = 0 was not the same for each variation, but the final pressure at the top of the furnace showed a similar tendency across the variations.
Chart of fine-solids volume fraction
This sub-chapter presents results for the distribution of the relatively small sand volume fractions, more commonly called fine particles. Fine particles differ from the dense bed, which tends to fluctuate only in the lower furnace and drives the fluidization process; fine particles are smaller and do not clump like the dense bed. Fine particles that reach the cyclone can cause destructive abrasion in that area. To make the contours more informative for analyzing the distribution of fine sand particles, a very small volume-fraction display range was needed; in this simulation a range of 0 to 0.05 was selected to highlight areas with high sand volume fractions. The work of Kinkar et al focused on the cyclone area and called for a deeper investigation [6], so this region was examined in the present simulation: data were extracted at the midpoint of the right cyclone along a 2.4-meter line, in accordance with Figure 4, in order to determine the sand volume fraction in that specific area. Based on the data in Figure 5, it is concluded that the higher the load, the higher the volume fraction entering the cyclone, and likewise that additional primary air led to a higher sand volume fraction. The 70 MW 50-50 case showed a very small sand volume fraction.
Sands velocity
To obtain good quantitative data, the x- and y-direction velocity vectors at the cyclone inlet were extracted along the target area in Figure 4 and plotted. The resulting sand velocities in the x and y directions are shown in Figures 6 and 7: the sand velocity entering the cyclone reaches 16.7 m/s in the x direction and 4.98 m/s in the y direction at a load of 120 MW with a 55-45 air ratio. The conclusion from both graphs is that increasing the load and increasing the primary air both caused the sand velocity to increase.
Conclusion
Full-loop CFD simulation provides a great deal of information about fluidization characteristics that cannot be obtained through experiments, especially when a detailed study of a boiler part such as the furnace or the cyclone area is needed. This numerical simulation was performed for two load variations and three alternative combustion-air ratios. The outcomes are shown as charts of superficial velocity, furnace pressure, fine-solids (sand) volume fraction, and sand velocity, the parameters that affect fluidization. The study agrees with the conclusion of Ari et al that operation with combustion-air ratios of 50%-50% and 55%-45% leads to good fluidization at 63% load, whereas operation at 100% and 110% load with those air ratios causes a large amount of sand to enter the cyclone inlet along with higher sand and air velocities. All of these results are essential knowledge for power-plant engineers regarding the fluidization of CFB boiler components. | 2021-06-02T23:42:47.881Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "727d99018619569e52aa38d769294e555cb6a216",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1096/1/012129",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "727d99018619569e52aa38d769294e555cb6a216",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
119246962 | pes2o/s2orc | v3-fos-license | Cooperative effects in one-dimensional random atomic gases: Absence of single atom limit
We study superradiance in a one-dimensional geometry, where N ≫ 1 atoms are randomly distributed along a line. We present an analytic calculation of the photon escape rates based on the diagonalization of the N × N coupling matrix U_ij = cos x_ij, where x_ij is the dimensionless random distance between any two atoms. We show that unlike a three-dimensional geometry, for a one-dimensional atomic gas the single-atom limit is never reached and the photon is always localized within the atomic ensemble. This localization originates from long-range cooperative effects and not from disorder as expected on the basis of the theory of Anderson localization.
I. INTRODUCTION
Cooperative effects such as superradiance and subradiance [1,2] originate from indirect interactions between atoms through the radiation field. These effects show up as a multi-atomic coherent emission which is qualitatively different from that of a single atom [3,4]. Cooperative effects have been studied both theoretically and experimentally in various systems, such as quantum dots [5], Bose-Einstein condensate [6,7], cold atoms [8,9] and Rydberg gases [10].
In the context of cold atoms, photon localization, which occurs as a decrease of photon escape rates from disordered media, has been investigated [11]. It has been shown that for a three-dimensional atomic system, photon localization is primarily determined by cooperative effects rather than by disorder. Moreover, localization shows up as a crossover between delocalized and localized photons and not as a disorder-driven phase transition as for Anderson localization.
In this Letter we study photon escape rates from a disordered one-dimensional atomic gas and compare them to those obtained in a three-dimensional geometry. We will show that unlike a three-dimensional geometry, for a one-dimensional atomic gas the single atom limit is never reached and the photons are always localized. This localization stems only from long-range cooperative effects and not from disorder as expected from the theory of Anderson localization.
II. MODEL
We are interested in the dipolar interaction of N ≫ 1 identical atoms with a scalar radiation field. Here, atoms are taken as non-degenerate, two-level systems. The energy separation between the excited state |e⟩ and the ground state |g⟩, including the radiative shift, is ħω_0, and the inverse lifetime of the excited level is Γ_0. Indeed, this two-level atom model neglects the energy structure of a real atom, but since selection rules restrict the allowed transitions between states, this approximation is more than a mathematical convenience.
We consider a one-dimensional geometry where the atoms are randomly distributed along a line. Moreover, only modes of the field that belong to an elongated pencil-shaped radiation pattern parallel to the inter-atomic axis are taken into account. This radiation pattern, obtained in a pencil-shaped cavity, corresponds to the directional emission along the cavity axis [12,13].
We neglect recoil effects and the Doppler shift by assuming that the typical speed of the atoms is large compared to ħk/µ but small compared to Γ_0/k, where k is the radiation wavenumber and µ is the mass of the atom. Additionally, we neglect retardation effects, so each atom is allowed to influence the others instantaneously.
III. DICKE STATES AND COOPERATIVE EFFECTS
The absorption of a photon by a pair of atoms, each initially in its ground state and located at r_1 and r_2 respectively, leads to a configuration where one atom is excited while the other is de-excited. The possible configurations can be represented by the Dicke states [1]. The singlet Dicke state is |00⟩ = (|e_1 g_2⟩ − |g_1 e_2⟩)/√2, and the triplet Dicke states are |11⟩ = |e_1 e_2⟩, |10⟩ = (|e_1 g_2⟩ + |g_1 e_2⟩)/√2, and |1,−1⟩ = |g_1 g_2⟩. These states are characterized by an effective interaction potential and a modified lifetime as compared to independent atoms. For the one-dimensional geometry considered here, the cooperative spontaneous emission rate, or inverse lifetime, of the states |±⟩ is [13]

Γ_± = Γ_0 (1 ± cos k_0 r),    (3)

where k_0 = ω_0/c and r = |r_1 − r_2|. The corresponding cooperative radiative level shift, or interaction potential, is given by ∆E_± = ±(ħ/2) Γ_0 sin k_0 r. For comparison, in a three-dimensional system the cooperative spontaneous emission rate is [14]

Γ_± = Γ_0 (1 ± sin(k_0 r)/(k_0 r)),

and the corresponding cooperative radiative level shift is given by ∆E_± = ∓(ħ/2) Γ_0 cos(k_0 r)/(k_0 r). When the atoms are close enough (k_0 r ≪ 1), the Dicke limit is obtained in both geometries, namely Γ_± = (1 ± 1)Γ_0. But when the atoms are well separated (k_0 r ≫ 1), the single-atom limit is not recovered in eq. (3), since the one-dimensional inverse lifetime is a periodic function of the inter-atomic distance, while the three-dimensional one falls off with the inter-atomic separation. Similarly, the range of the one-dimensional interaction potential is infinite, while it is finite in the three-dimensional case. This fundamental difference is the driving effect in the calculation of photon escape rates from an atomic gas in the next section.
IV. PHOTON ESCAPE RATES FROM ATOMIC GASES
To go beyond the case of two atoms, we follow [11,12], who studied the equation of motion for the reduced atomic density operator ρ of a gas of atoms with a single excitation. The time evolution of the ground-state population associated with ρ, where |G⟩ = |g_1, g_2, ..., g_N⟩ and S_i^± is the raising (lowering) operator of atom i, is governed by an N × N Euclidean random coupling matrix U, defined as follows. For the one-dimensional geometry,

U_ij = cos(k_0 r_ij),    (6)

while for the three-dimensional gas

U_ij = sin(k_0 r_ij)/(k_0 r_ij),    (7)

where r_ij = |r_i − r_j| is the random distance between any two atoms. With the help of the eigenvalue equation of U, collective raising and lowering operators can be constructed from the S_i^±. Thus, we can interpret the eigenvalues Γ_n of the coupling matrix U as the photon escape rates from the gas and define the dimensionless average density of photon escape rates as

P(Γ) = (1/N) ⟨ Σ_{n=1}^{N} δ(Γ − Γ_n) ⟩,

where the average, denoted by ⟨· · ·⟩, is taken over the spatial configurations of the atoms.
The quantity used as a measure of photon localization is the normalized function C = 1 − 2 ∫_1^∞ dΓ P(Γ). C thus measures the relative number of states having a vanishing escape rate. In the three-dimensional case, discussed in [11], the function C exhibits a scaling behavior over a broad range of system size and disorder. The scaling variable is N/N_⊥, where N_⊥ is the number of transverse photon modes in the system. C allows one to compare the different contributions of disorder and cooperative effects to photon localization, although an unambiguous distinction between the two mechanisms cannot be achieved in the three-dimensional geometry [11]. In this Letter, we will show that in the one-dimensional case C exhibits a scaling behavior as well, and the scaling variable is N. We will also show that the expression of C obtained in the one-dimensional geometry is valid for both ordered and disordered systems, so it is possible to unambiguously discern between the contributions of disorder and cooperative effects to photon localization.
The eigenvalues of U in eq. (7) have been obtained numerically in [11]. Recently, based on the Marchenko-Pastur law [15], the authors of [16] have approximated the spectrum of this matrix in the limit of large systems. Here, we will provide an analytic diagonalization of (6) that holds for an arbitrary system size.
To that purpose, we consider N ≫ 1 atoms confined in a one-dimensional system of length L = 2πa/k_0. The atoms are randomly distributed with a uniform density N/L, and the corresponding coupling matrix is given by eq. (6). The average density of photon escape rates, obtained for many random configurations of the atoms, is presented in fig. 1 for different values of W = N/(2πa) and a. A remarkable difference between the one- and the three-dimensional case [11] is observed. Unlike the three-dimensional geometry, the single-atom limit is never reached and the photons are always localized in the atomic gas.
Let us distinguish between two regimes: the Dicke regime, where a ≪ 1, and the large sample regime, where a ≥ 1. In the Dicke regime the coupling matrix is U_ij = 1 for all i, j. Thus, the average density of photon escape rates is given by

P(Γ) = ((N − 1)/N) δ(Γ) + (1/N) δ(Γ − N),    (10)

as presented in fig. 1(d). Eq. (10) holds in the Dicke regime of the three-dimensional case as well. The spectrum of U given above yields C = 1 − 2/N. For the current case, where N ≫ 1, C = 1, indicating photon localization. Away from the Dicke limit, in the large sample regime shown in fig. 1(a)-(c), P(Γ) is calculated as follows. The N × N matrix U_ij = cos(k_0 r_ij) may be rewritten as U = (1/2) A†A, where A is the 2 × N matrix defined by A_0j = e^{i k_0 r_j} and A_1j = e^{−i k_0 r_j}. As U is a real symmetric matrix, its non-vanishing eigenvalues can be found from those of the 2 × 2 matrix (1/2) AA†, given by

(1/2) AA† = (1/2) [ [N, M], [M*, N] ].    (11)

Here M = Σ_{k=1}^{N} e^{2 i k_0 r_k} is a random variable, where k_0 r_k is uniformly distributed over [0, 2πa]. Since the two eigenvalues of eq. (11) are

λ_± = (N ± |M|)/2,    (12)

the spectrum of U is given by

P(Γ) = ((N − 2)/N) δ(Γ) + (1/N) [ ⟨δ(Γ − λ_+)⟩ + ⟨δ(Γ − λ_−)⟩ ].    (13)

We can estimate |M| by writing

|M|² = N + Σ_{k≠l} e^{2 i k_0 (r_k − r_l)},    (14)

where the second term involves N(N − 1) terms. On average over non-correlated disorder the second term vanishes, so that |M| ∼ √N. From the spectrum of U given in eq. (13) it is evident that C = 1 − 4/N. Thus, for large values of N the photons are localized in the gas.
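The spectral structure derived above is easy to verify numerically. The following short Python sketch (an illustration added here for the reader, not part of the original calculation) builds the coupling matrix U_ij = cos(x_i − x_j) for random positions and checks that only two eigenvalues are non-vanishing and equal to (N ± |M|)/2:

import numpy as np

rng = np.random.default_rng(0)
N, a = 200, 5.0                           # number of atoms and system size parameter
x = rng.uniform(0.0, 2 * np.pi * a, N)    # dimensionless positions k0*r_k
U = np.cos(x[:, None] - x[None, :])       # coupling matrix U_ij = cos(k0 r_ij)

eigs = np.sort(np.linalg.eigvalsh(U))[::-1]
M = np.sum(np.exp(2j * x))                # M = sum_k exp(2 i k0 r_k)

print(eigs[:3])                           # only the first two eigenvalues are (numerically) non-zero
print((N + abs(M)) / 2, (N - abs(M)) / 2) # they coincide with lambda_± = (N ± |M|)/2
print(1 - 2 * np.sum(eigs > 1.0) / N)     # localization measure C = 1 - 4/N for large N

For N = 200 the last line gives C = 0.98, consistent with C = 1 − 4/N.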
In order to calculate the distribution function of |M| exactly, we first assume that a is an integer. In this special case, the distribution function is just the Rayleigh distribution, whose mode is √(N/2). Figure 2 shows the eigenvalue spectrum of U for a = 1 (excluding the degenerate subradiant mode at Γ = 0) as well as the calculated P(Γ) given by eqs. (12) and (15). In the general case, for an arbitrary value of a, we follow [17], as described below. The real and imaginary parts of M have means m_1, m_2 and variances v_1, v_2, and their joint distribution (eq. (16)) is expressed in terms of R = |M| and tan Θ = Im(M)/Re(M). The required distribution function is obtained by an angular integration of eq. (16). Performing the integration leads to an infinite series of Bessel functions (eqs. (17) and (18)), where I_n is the modified Bessel function of the first kind, ε_0 = 1, and ε_n = 2 otherwise. Figure 3 shows the numerical spectrum of eq. (6) for a = 1.3 (excluding the degenerate subradiant mode at Γ = 0) as well as the corresponding calculated P(Γ). In the special case considered earlier, where a is an integer, it is easy to check that m_1 = m_2 = 0 and v_1 = v_2 = N/2; thus, eq. (15) is recovered.
V. DISCUSSION
The fundamental difference between the one- and three-dimensional geometries, i.e., the existence or the absence of a crossover between delocalized and localized photons, is due to the different nature of the coupling matrices. While U falls off with the inter-atomic separation in the three-dimensional case, it is a periodic function of the inter-atomic distance in the one-dimensional geometry. Thus, the single-atom limit is never reached.
Let us stress that eq. (13) is valid for both ordered and disordered media (in the case of an ordered system M is not a random variable, but eq. (13) still holds). Since the disorder affects only two eigenvalues, namely λ_±, P(Γ) comprises N − 2 vanishing eigenvalues regardless of disorder, and C = 1 for N ≫ 1. Therefore, cooperative effects, and not disorder, are the mechanism that leads to photon localization in the case considered here. The same claim is valid in the Dicke regime as well, since eq. (10) holds for both ordered and disordered media. This unambiguous distinction between the contributions of disorder and cooperative effects to photon localization cannot be achieved in the three-dimensional geometry, where the role played by each of these two mechanisms cannot be determined separately [11].
The distribution of resonance widths in one-dimensional disordered media has also been studied in [18], where it has been shown that it follows a power-law P(Γ) ∼ Γ^−1 decay. The spectrum in eq. (13) does not, however, obey this power law. The difference stems from the fact that the authors of ref. [18] have calculated P(Γ) using the real part of the spectrum of the complex-valued Green matrix exp(i k_0 r_ij), which describes propagation of a wave scattered by a dipole at r_i to a dipole at r_j. Here, we have taken a different approach [11,12] and studied the time evolution of the ground-state population associated with the reduced atomic density operator of the system. As explained earlier, in our treatment one can interpret the eigenvalues of the real-valued matrix cos(k_0 r_ij) as the photon escape rates from the atomic gas. According to [18], the P(Γ) ∼ Γ^−1 behavior can be interpreted as an unambiguous signature of Anderson localization of light in random systems. The fact that our result does not follow this power law supports the claim that cooperative effects and not disorder is the mechanism that leads to photon localization in the case studied here.
It is interesting to compare these results to the two-dimensional case, where U_ij = J_0(x_ij) [19], x_ij being the dimensionless random distance between any two atoms. In this geometry, when the atoms are close enough the Dicke limit is reached and eq. (10) holds, as in the other geometries. In the opposite limit, U can be approximated as U_ij ≃ √(2/(π x_ij)) cos(x_ij − π/4), and since it falls off with the square root of the inter-atomic separation, the single-atom limit can be reached. We conclude that the absence of a single-atom limit is specific to the one-dimensional geometry.
Recently, the authors of ref. [20] have studied the interplay of disorder and superradiance in a one-dimensional Anderson model in which all the sites are coupled to a common decay channel with equal coupling strength. By diagonalizing the corresponding non-Hermitian Hamiltonian, the participation ratios of the eigenstates have been obtained. It has been shown that while subradiant states become localized as disorder increases, superradiant states remain delocalized. These results differ substantially from ours. The difference stems from the fact that the authors of ref. [20] have considered an Anderson model with on-site disorder and assumed that the sites are coupled to the continuum with equal coupling strength, while in our treatment only the position-dependent continuum coupling is taken into account.
VI. SUMMARY
We have studied cooperative effects in a one-dimensional random atomic system. By an analytic diagonalization of the Euclidean random matrix U_ij = cos x_ij, where x_ij is the dimensionless random distance between any two atoms, we have calculated the photon escape rates from the gas for an arbitrary system size. We have shown that the single-atom limit is never reached and the photon is always localized. This localization stems from long-range cooperative effects and not from disorder as expected on the basis of the theory of Anderson localization. | 2013-07-07T17:40:05.000Z | 2013-03-01T00:00:00.000 | {
"year": 2013,
"sha1": "501809b95511848d14579cb5831c7e5c3e8c5882",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1307.1888",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "501809b95511848d14579cb5831c7e5c3e8c5882",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
264591667 | pes2o/s2orc | v3-fos-license | Roastgsa: a comparison of rotation-based scores for gene set enrichment analysis
Background Gene-wise differential expression is usually the first major step in the statistical analysis of high-throughput data obtained from techniques such as microarrays or RNA-sequencing. The analysis at gene level is often complemented by interrogating the data in a broader biological context that considers as unit of measure groups of genes that may have a common function or biological trait. Among the vast number of publications about gene set analysis (GSA), the rotation test for gene set analysis, also referred to as roast, is a general sample randomization approach that maintains the integrity of the intra-gene set correlation structure in defining the null distribution of the test. Results We present roastgsa, an R package that contains several enrichment score functions that feed the roast algorithm for hypothesis testing. These implemented methods are evaluated using both simulated and benchmarking data in microarray and RNA-seq datasets. We find that computationally intensive measures based on Kolmogorov-Smirnov (KS) statistics fail to improve the rates of simpler measures of GSA like mean and maxmean scores. We also show the importance of accounting for the gene linear dependence structure of the testing set, which is linked to the loss of effective signature size. Complete graphical representation of the results, including an approximation for the effective signature size, can be obtained as part of the roastgsa output. Conclusions We encourage the usage of the absmean (non-directional), mean (directional) and maxmean (directional) scores for roast GSA analysis as these are simple measures of enrichment that have presented dominant results in all provided analyses in comparison to the more complex KS measures. Supplementary Information The online version contains supplementary material available at 10.1186/s12859-023-05510-x.
Background
Gene-wise differential expression is the most common analysis of high-throughput expression data generated with microarrays or RNA-sequencing. Subsequent analyses include the screening of the data at broader scales whose measurement units are groups of genes with common biological functions.
There is a multitude of methods to evaluate aggregated gene expression changes in functional gene sets under different experimental conditions.These are typically classified on the basis of the statistical test being used [1]: (a) self-contained approaches assess whether the observed gene set association with the experimental condition can be expected by chance, without making any reference to other genes in the genome [2][3][4]; and, (b) competitive approaches aim to determine whether such association with the experimental condition is more extreme than that observed in comparable gene sets in the data [5,[6][7][8][9][10][11][12].
Depending on the approach, the distribution underlying the null hypothesis has been approximated non-parametrically based on either gene randomization [6,13] or sample randomization [2,3,6] approaches. Gene randomization is associated with competitive testing, whereas sample randomization is presented as self-contained or competitive depending on the test statistic used. Similarly, parametric approximations of either type have previously been developed [4,[8][9][10].
Gene Set Enrichment Analysis (GSEA) [6], one of the most widely used methods for enrichment in the biomedical community, computes a Kolmogorov-Smirnov-like (KS) test that compares the differential expression effects in genes belonging to the target gene set against the rest of the genes in the genome. For a sufficient number of observations, sample permutations are used to maintain the integrity of the intra-gene set correlation structure in defining the null distribution of the test, resulting in a hybrid approach that combines a competitive statistic with sample randomizations to define the null distribution. However, for small sample sizes (fewer than 7 per experimental condition), p-value granularity becomes a severe problem and gene permutation is recommended instead [14]. This approach, commonly known as GSEAPreranked, overlooks the underlying gene-correlation structure of the testing set, thereby compromising the control of the false positive rate when the intra-gene set correlation exceeds that expected in randomly selected gene sets [15].
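For concreteness, the KS-like running-sum statistic popularized by GSEA can be sketched in a few lines. The snippet below is an illustrative Python implementation of a weighted running-sum enrichment score applied to simulated gene-level statistics; it is not code from GSEA or from the roastgsa package, and the example data are made up.

import numpy as np

def running_sum_es(stats, in_set, p=1.0):
    # Weighted KS-like enrichment score: walk down the ranked gene list,
    # stepping up (weighted by |stat|^p) at set members and down otherwise,
    # and return the signed maximal deviation of the running sum.
    order = np.argsort(stats)[::-1]
    w, hit = np.abs(stats[order]) ** p, in_set[order]
    p_hit = np.cumsum(np.where(hit, w, 0.0)) / w[hit].sum()
    p_miss = np.cumsum(~hit) / (~hit).sum()
    running = p_hit - p_miss
    return running[np.argmax(np.abs(running))]

rng = np.random.default_rng(1)
t = rng.normal(size=1000)            # hypothetical gene-level statistics
members = np.zeros(1000, dtype=bool)
members[:30] = True
t[:30] += 1.0                        # enrich the 30 genes of the tested set
print(running_sum_es(t, members))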
Smyth et al. [2] proposed the more general procedure of rotating the residual space of the data, which is useful even for small degrees of freedom. Both the self-contained test (roast) and its competitive version (romer) have been implemented [16]. The romer methodology can be considered the most general gene and sample randomization GSEA approach in the current literature [17], and it is the focus of this work. However, in our opinion, the test statistics provided in romer, which are all functions of the moderated t-statistic ranks, are too limited.
In this paper we review the rotational approach for linear models presented in [18], which motivates the roast method for enrichment, and propose to complete the romer functionality by providing other statistics used in the GSA context. We compare the performance of the KS-based test statistics introduced in the GSEA [6] and Gene Set Variation Analysis (GSVA) [19] methodologies, as well as re-standardized statistics based on summary statistic measures [7], using both simulated and benchmarking data [20]. Furthermore, as complementary information to interpret the output of the roast GSA methods, we introduce the concept of effective signature size as a proxy for the total number of uncorrelated genes in the testing set, which can be directly linked to the power of the statistical test being used. All the measures addressed, as well as the approximation of the effective signature size, are implemented in the Bioconductor R package roastgsa.
Rotations based gene set enrichment analysis
Rotation tests for multivariate linear regression were first proposed in [18] as generalization of standard permutation tests, with the assumption of multinormality.If such distributional assumption is correct, rotation tests have the great advantage of being applicable to complex models even for small sample sizes.In [2], the rotation approach is adapted to be used as the most general GSEA tool, both for competitive and self-contained testing.
Briefly, the rotation approach consists of the following assumptions and operations. Let Y_i be a q-dimensional vector, independent for any i ∈ [1, ..., n], that represents the gene expression profile of the ith sample under the multivariate normal distribution assumption Y_i ∼ N_q(B^T x_i, Σ_r), where x_i is the ith row of the n × p design matrix X, which contains p − k adjusting covariates and k covariates of interest. The p × q matrix B contains the linear regression coefficients and Σ_r is the error covariance matrix. The main steps of the rotational approach proposed in [2] can be summarized as follows: (1) QR decomposition of X to estimate the regression coefficients of interest for the q genes and their corresponding error variances. (2) When q is sufficiently large, the moderated t-statistic, as defined in the limma methodology, can also be computed and used for further calculation of the enrichment score. This t-statistic updates the error variance of the linear models using the information of the estimated variances for all genes based on empirical Bayes posterior means. The prior distribution is obtained by fitting a scaled F-distribution to the sample variances. The posterior distribution is the weighted average of the estimated location of the prior distribution and the sample variances, with weights determined by the degrees of freedom of the estimated F-distribution and n − p, respectively. Moderated t-statistics for all genes are further transformed to z-scores using the quantile function of the Student-t distribution, which is especially useful when the number of degrees of freedom left in the model is small and the observed t distribution is heavy-tailed. (3) For any testing gene set, a GSA summary test statistic is calculated using the (z-score transformed) moderated t-values. Depending on the proposed statistic, hypothesis testing is considered either competitive or self-contained. (4) Rotation applied to the residual space of the data can be handled by conditioning only on sufficient statistics of the unknown covariance matrix. Rotation statistics can then be estimated and used to define the null distribution for hypothesis testing (details in Additional file 3: Sects. 1-2).
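To make the rotation idea concrete, the sketch below illustrates steps (1)-(4) for the simplest case of a two-group comparison with an ordinary (unmoderated) t-statistic and a mean enrichment score. It is a simplified numpy illustration of the rotation principle, not the roastgsa or limma implementation (which additionally uses empirical Bayes moderation and the z-score transformation described above).

import numpy as np

def rotation_test(Y, group, gene_set, n_rot=500, seed=0):
    # Y: genes x samples matrix; group: 0/1 vector; gene_set: row indices of the tested set.
    rng = np.random.default_rng(seed)
    n = Y.shape[1]
    X = np.column_stack([np.ones(n), group])     # design: intercept + condition of interest
    Q, _ = np.linalg.qr(X, mode="complete")      # step (1): QR decomposition of the design
    Z = Y @ Q[:, 1:]                             # effect direction + residual space, d = n - p + 1 dims
    d = Z.shape[1]

    def set_score(direction):
        zr = Z @ direction                       # per-gene effect along this direction
        s2 = (np.sum(Z ** 2, axis=1) - zr ** 2) / (d - 1)
        t = zr / np.sqrt(s2)                     # ordinary t-statistics (sign depends on QR orientation)
        return t[gene_set].mean()                # step (3): mean summary score for the set

    e1 = np.zeros(d)
    e1[0] = 1.0                                  # the observed data correspond to the fixed direction e1
    observed = set_score(e1)
    null = np.empty(n_rot)
    for b in range(n_rot):                       # step (4): random rotations define the null distribution
        r = rng.normal(size=d)
        null[b] = set_score(r / np.linalg.norm(r))
    p = (1 + np.sum(np.abs(null) >= abs(observed))) / (n_rot + 1)
    return observed, p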
The Roast algorithm is implemented in the R package limma [16].
Defining the null hypothesis and GSA summary test statistics
We present several GSA summary statistics with different goals and interpretations that can be used for roastgsa (Tables 1 and 2, Fig. 1). We discriminate between the types of test statistics that can be considered for the self-contained hypothesis (SC), in which the observed coefficients of the model for the gene set of interest are compared to what could be found by chance if new data were observed, and the competitive hypothesis (CO), in which the evaluation is done after centering and scaling the scores for the gene set of interest against what is observed in the whole genome, thus taking into account the rest of the genome for testing. For both types of hypothesis testing problems, the proposed summary statistics can maintain the integrity of either the distributional or the locational null hypothesis or both (Table 3, Fig. 1). Specifically, the T_mean (both CO and SC), T_meanrank (only CO) and T_median (CO and SC) are scores that maintain the integrity of both the distributional and the locational hypothesis. The mean is provided to measure the common directional behavior of the testing set. The median and meanrank are measures robust to outliers that can prevent giving importance to gene sets with only a few influential genes, at the expense of losing statistical power. These two scores can serve to rank gene sets in battery testing when extreme values are undesirable. We also present the T_maxmean (CO and SC), the T_ksmax (CO) and the T_absmean (CO and SC). These three scores do not control the locational null hypothesis error rates unless the more restrictive distributional hypothesis is imposed. The maxmean uses the moderated t magnitudes of only the most prominent direction, either positive or negative. This is relevant to pick up the main trend of the gene set without compromising statistical power. The ksmax is the original score for GSEA [6], and, similarly to maxmean, it looks for concentration of genes of the testing set in either of the two extremes of the ranked list of genes. The absmean is the only non-directional score presented here, which is found useful as a way to capture the activity of highly significant genes in the testing set, regardless of their direction. Finally, the ksmean (CO) uses a KS statistic similar to the ksmax but penalizes effects with contrary directions, hence it controls the rejection rate under the locational null hypothesis when the distributions in the two directions are equal.
Table 3 Formulation that distinguishes between distributional and locational hypotheses for both self-contained and competitive schemes
Effective signature size of a gene set
Gene sets in publicly available databases, such as in the Broad Hallmarks collection, are specifically built based on modules of coordinated genes [21] (Additional file 1: Fig. S1). Moreover, pathways from other collections such as KEGG or Gene Ontologies might also show high gene-to-gene correlation. This can be attributed to biological co-regulation or to technical biases, which might still be detected even when the effect of the known covariates has been adjusted a priori. Generally, the variance of summary statistics increases with the intra-gene set correlation (Additional file 1: Fig. S2a). This apparent loss of precision implies an incorrect assumption of independence between the genes in a gene set. To capture the degree of this discordance, we define the notion of effective signature size of a tested gene set as the total number of genes that would be needed, if these were selected at random, to achieve the same summary statistic variance as that of the testing set. The effective signature size can be interpreted as a realistic measure of the total number of independent variables that contribute to the variance of the statistic and thus affect the power of the test.
To get an estimate of the effective signature size, sample variances of rotation scores for randomly generated sets of size m, v_m^(l), for any m ∈ [1, ..., m_0] and any gene set randomization instance l ∈ [1, ..., L], are compared to the observed rotation score variance v_s for the tested gene set of size m_0 (Additional file 1: Fig. S2b). A p-value that approximates the probability of obtaining a variance as extreme as v_s in randomly selected sets of size m is then computed as the proportion of the L randomization instances of size m whose variance is at least as extreme as v_s.
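A simplified illustration of this idea is sketched below in Python (the roastgsa package implements the procedure in R; the function and variable names here are hypothetical). Given a genes x rotations matrix of rotation t-statistics, it scans candidate sizes m and reports the one whose randomly drawn sets best reproduce the observed variance of the rotation mean score of the tested set.

import numpy as np

def effective_signature_size(T_rot, gene_set, sizes, L=100, seed=0):
    # T_rot: genes x rotations matrix of (moderated) t-statistics under rotation.
    # gene_set: indices of the tested gene set; sizes: candidate effective sizes m.
    rng = np.random.default_rng(seed)
    n_genes = T_rot.shape[0]
    v_obs = T_rot[gene_set].mean(axis=0).var()   # variance of the set score across rotations
    gap = []
    for m in sizes:
        v_rand = [T_rot[rng.choice(n_genes, m, replace=False)].mean(axis=0).var()
                  for _ in range(L)]             # variances of random sets of size m
        gap.append(abs(np.median(v_rand) - v_obs))
    return sizes[int(np.argmin(gap))]            # size whose random sets match the observed variance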
Results
Specification and performance comparisons of the presented roastgsa statistics are provided in Fig. 2. Full results and discussion are detailed below.
Microarrays simulation model
We simulate data following a multivariate normal distribution, Y_i ∼ N_q(βX_i, Σ), with X_i = 0 for i ≤ n/2 and X_i = 1 for i > n/2. With the objective of using a gene-to-gene linear dependence structure that could be observed in a real case study, the covariance Σ is determined by shrinking the sample correlation matrix of the metabric data (Additional file 3: Sect. 3) toward the identity matrix (to obtain a positive definite matrix).
The expected values βX_i for all genes measured in the metabric data are specified according to the testing scenario under consideration (simulation scenarios are presented below).
RNA-seq simulation model
We consider the gene expression counts matrix observed in the GTEX-Breast samples, from mammary tissue (Additional file 3: Sect. 3), randomly assign n samples to two groups of size n/2, and add signal to the initial counts using the binomial thinning approach implemented in the seqgendiff R package [22], function thin_2group. Log2-fold changes for all genes measured in the GTEX-Breast data are specified according to the testing scenario under consideration (simulation scenarios are presented below). The count matrix is log-transformed with regularization using the DESeq2 rlog function [23] prior to roastgsa testing.
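As a hedged illustration of the binomial thinning step (the actual analysis used seqgendiff::thin_2group in R; the simplified Python sketch below only conveys the idea and is not the package's implementation):

import numpy as np

def thin_two_group(counts, lfc, group, seed=0):
    # counts: genes x samples matrix of RNA-seq counts (integers)
    # lfc: per-gene log2 fold change of group 1 relative to group 0
    # group: 0/1 vector of sample assignments
    # Signal is added by binomially thinning the group that should be lower,
    # with success probability 2^(-|lfc|), which keeps the data count-valued.
    rng = np.random.default_rng(seed)
    out = counts.copy()
    p = 2.0 ** (-np.abs(lfc))
    for g, fc in enumerate(lfc):
        if fc == 0:
            continue
        target = (group == 1) if fc < 0 else (group == 0)
        out[g, target] = rng.binomial(counts[g, target], p[g])
    return out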
Evaluation criteria
We take 1000 instances of the simulation process with n = 6, 10, 20, 30, 100 (sampling multivariate normal data for microarrays or downsampling GTEX-Breast counts + binomial thinning for RNA-seq). From these n samples, the condition of interest is determined by a factor variable that takes values 0 and 1 at random (n/2 times each). Moderated t-statistics are estimated for each instance of the simulation process. We use 500 rotations for approximating the p-values. To evaluate the performance of the roastgsa scores, we compute the proportion of times (out of the total 1000 instances) that the test is rejected at a significance level of 0.05.
Fig. 2 Characteristics of all presented scores: performance in simulated data is measured from 1 (poor) to 10 (great) based on the obtained recovery rates (the average recovery rate relative to the best rate); performance in benchmarking data is measured from 1 to 10 based on the M1 ranking; computational time is measured relative to the fastest method; scores that were implemented in limma are specified for both the romer (competitive scores) and roast (self-contained scores) functions
Simulation scenarios
We consider five different biologically meaningful scenarios to evaluate the performance of the methods (Additional file 1: Fig. S3):
• (SC0) There is no effect of the condition of interest on the expression of the tested gene set.
• (SC1) All genes in the tested gene set have the same expected fold change, which is larger than the global expected fold change.
• (SC2) Only a group of interconnected genes in the gene set has a common activity in the gene set.
• (SC3) Two groups of genes, one up-regulated and the other down-regulated, are active in the gene set.
• (SC4) A few genes present a much higher effect than the rest of the genes (outliers).
SC0 is a clear consideration of a model under the null hypothesis to evaluate the empirical size of the test. SC1 and SC2 could be strategies to evaluate the power of the test, as target gene sets under these two models are likely to be considered biologically relevant. SC3 occurs less frequently in public databases, but its recovery might also be useful to researchers. Targeting gene sets under SC4 is slightly more undesirable.
To mimic biologically realistic correlation structures of the test gene sets in the microarray simulations, we consider two gene sets from the literature that show substantially different intra-gene set correlations (Additional file 3: Sect. 3): genes in the (A1) TNFA signaling via NFKB hallmark, with a mean correlation of 0.10, and genes in the (A2) interferon alpha hallmark, with a mean correlation of 0.27 (Additional file 1: Fig. S1). Besides these two pathways, we consider an artificial control case with 31 uncorrelated genes (A3). For RNA-seq data, we consider only the SC0, SC1 and SC2 scenarios. For SC1 and SC2, we take two distinct gene sets, one with a cluster of highly correlated genes (average correlation of 0.22) and another with randomly selected genes (average correlation near 0). Importantly, for the simulations, we assume that all genes remain either unchanged or are affected equally by the condition, with the exception of the genes in each test gene set considered, which are enriched (as specified in Additional file 2: Tables S1-S2), one gene set at a time in independent simulation instances.
Performance of statistics using simulated data
Recovery rates for SC0-SC4 are compared across roastgsa scores, and the complete tables are presented in Additional file 2: Tables S3-S9. False positive rates are controlled for all presented scores. In terms of statistical power, on the one hand, scores that aim to capture the common activity of the pathway, such as the mean, ksmean or meanrank, do well in SC1 but fail to find good recovery rates for scenarios such as SC2, SC3 and SC4, where only a few genes from the whole testing set are differentially expressed. On the other hand, the maxmean and absmean do not penalize non-global activity, as happens in more democratic scores such as the mean or meanrank, leading to the largest recovery rates for SC2, SC3 and SC4. Strikingly, the absmean score loses power with respect to the maxmean approach for structures with low correlation (A3 in microarrays and SC2-lowcor in RNA-seq). Finally, the ksmax provides poorer recovery rates than the maxmean, with the latter defining a much simpler statistic for interpreting the outcome. These results are confirmed in both RNA-seq and microarray data.
Microarray and RNA-seq benchmarking data
The GSEABenchmarkeR package [24] provides 42 datasets that are part of the GEO2KEGG microarray compendium [20], in which the investigated phenotypes were associated with specific diseases. Additionally, the GSEABenchmarkeR provides 16 TCGA datasets with the gene expression (RNA-seq) profiles of patients with different types of cancer and also of a few samples with adjacent normal tissue. For each dataset, the relationship of several KEGG gene sets with the disease under investigation was rated by a "relevance score" (MalaCards, [25]), with the highest scores corresponding to gene sets largely associated with the disease.
These data have served as a benchmark to compare the performance of the presented GSA test statistics for battery testing. The outcomes of roastgsa are ranked from the most significant (I = 1) to the least significant (I = p) gene set and are compared to the MalaCards relevance scores (which we denote by ρ) using two measures, M1 and M2. The measure M1 uses the ranks of all pathways in a weighted average, whereas in M2 only the top 50 pathways contribute to the performance measurement. This second measure is proposed to reduce the importance of gene sets at the bottom of the rankings, which tend to be overlooked when screening battery-testing results.
To evaluate the performance of the roastgsa approach, we first compare the M1 measure to what could be obtained if gene set rankings were produced by chance. This was done by permuting the order of the gene set outcomes 1000 times. A p-value was calculated as the percentage of cases in which the observed value was lower than the permutation-based instances.
For the microarrays compendium, since relevance scores from different datasets are difficult to compare [24], for every dataset, we rank the performance of the seven GSA test statistics (from best 1 to worst 7) based on their M1 (and M2) ratings.
Performance of statistics using benchmarking data
In microarray data, the absmean score achieves the most similar rankings to the benchmarking data of all the studied methods, with the maxmean being slightly better than the ksmax (Additional file 1: Fig. S4-S7, Additional file 2: Table S10).
In the RNA-seq data, the absmean is the only score that obtains satisfactory rankings (9 out of 16 datasets have more extreme values of M1 than expected by random permutations, with α = 0.10 ).The rankings for the rest of the scores are poor, with only 2 out of the 16 datasets presenting M1 measures not expected at random (Additional file 1: Fig. S8).
Computational complexity
In terms of computational complexity (Table 4), the absmean, mean, maxmean, and meanrank are the fastest scores to compute. The median statistic requires slightly more time than the mean, while the KS-based statistics take considerably longer than the other summary statistics.
Visualization of roastgsa results
In the roastgsa R package, we implement several alternatives to visualize the results. The moderated t-statistics observed in the gene set of interest, which are centered and scaled when considering competitive testing, are represented as in Fig. 3a. This representation can be easily linked to the GSA test statistics used for enrichment. For example, the mean score can be related to the difference between the area for the positive scores and the area for the negative ones (separated by the dashed vertical line), or the maxmean can be characterized by the largest area (either the area with positive scores or the area with negative scores). The KS random walk enrichment plot associated with classic GSEA is still the most frequent representation for this type of enrichment analysis. Although this representation can be difficult to relate to simple summary statistics, we also included it as part of the roastgsa output (Fig. 3b). A p-value curve with the effective signature size (Fig. 3c) is helpful for linking the statistical significance of the tested set with the trend observed for all moderated t-statistics shown in Fig. 3b-c. For instance, a strong tendency on either side for a large part of the genes in the set of interest might not always correspond to statistically relevant results when genes are strongly correlated. This is because the variance of the test statistics under rotations decreases with the effective signature size, not the signature size itself. This graphical visualization is provided to guide the interpretation of the results. We complete the roastgsa output with a heatmap that shows a full landscape of the gene set activity of the testing set (Fig. 3d).
Conclusions
This work reviews the rotation testing approach for gene set analysis and compares the performance of the method under different enrichment score measures using both simulated and benchmarking data. The absmean (non-directional) and maxmean (directional) scores are simple measures of enrichment that present dominant results in all provided analyses in comparison to the more complex ksmax measure. Similarly, the mean and meanrank statistics achieve similar power to ksmean, the latter being much more computationally demanding. Following these empirical results, and also given the conclusive results in the work by [7], we encourage the use of simpler measures for GSA such as the absmean (the leading method in our comparison) or the directional scores maxmean and mean. Choosing between the absmean, maxmean, or mean should depend on the type of gene sets that are given priority for recovery. We distinguish between two clear scenarios: [A] common activity in all genes; and [B] a few active genes but with large effect sizes. In our simulations, we presented one case under A (SC1) and three distinct cases under B (SC2-4). The absmean score would favor gene sets under scenario B over gene sets under A. In fact, we observed that the absmean score could lose power with respect to the mean score due to a combination of both low effect size and a relatively high percentage of activated genes (Additional file 3: Sect. 4). On the other hand, the mean, meanrank and ksmean scores are designed to maximize recovery under scenario A but have limited capacity to detect hits under scenario B. The maxmean falls in the middle and tends to be the second best in both types of scenarios in our simulations. In the benchmarking data, some KEGG pathways contain both activator and inhibitor genes, which might explain why the absmean score outperforms the other scores evaluated.
Fig. 3 Roastgsa output figures: a the ordered moderated t-statistics in various formats: area under the curve for all genes ordered by moderated-t statistic, barcode plot for these ordered values, and density; b classic GSEA plot; c effective signature size p-value curve that determines the number of randomly selected genes needed to obtain levels of variability in the rotation GSA scores as extreme as the rotation GSA score variance in the testing gene set; d normalized expression values and gene set statistics to represent the variation across samples for the gene set of interest
We considered both microarray (assuming multinormality) and RNA-seq data in simulation scenarios resembling real case studies. One aspect that we explored further in the RNA-seq data was the relationship between gene coverage and power for the roastgsa methods. For a fixed effect size, the power to detect differentially expressed genes increases with the total coverage. Consequently, truly enriched gene sets with a higher percentage of lowly expressed genes are less likely to be detected at the same significance level as a gene set of the same size and higher overall expression (Additional file 3: Sect. 5).
Although the main focus of this work was the comparison of the roastgsa scores using the roast rotation algorithm to define the null distribution, we also examined the performance of a widely used GSA approach, namely the limma method camera. We compared the roastgsa and camera competitive approaches using the benchmarking data. We found that the absmean and the maxmean approaches (roastgsa) outperformed the camera method, which gave results comparable to the roastgsa mean approach (Additional file 3: Sect. 6).
Commonly used GSEA plots typically provide information regarding gene variation after averaging out the sample variability (i.e., taking gene-wise fold changes or t-statistics as shown in Fig. 3a-b). We highly recommend complementing these plots with a graphic that also allows visualization of the sample variability for the tested gene sets. If the dimensions are not too large, a simple heatmap, as shown in Fig. 3d (produced by the roastgsa R package), is useful to detect the genes that are activated in the process, as quality control to detect samples that can be highly influential in the analysis, and, last but not least, as a way to be honest about the total amount of data that is available for testing.
Fig. 1 Scope of rotational gene set analysis: from gene set of interest to statistical significance. The enrichment scores mean, maxmean, median and absmean are proposed for both self-contained and competitive approaches. The meanrank, ksmax and ksmean are exclusive scores for competitive testing. All test statistics are defined in Tables 1 and 2
Table 1 Formulation of summary statistics mean, absmean, median, and maxmean for both self-contained and competitive testing
Table 2 Formulation of enrichment score functions meanrank, ksmax and ksmean, defined for competitive testing
Table 4 Computational time (system.time R function outcome) for all proposed scores. Execution time obtained by using 10 replications of roastgsa on 50 gene sets, 500 rotations and N = 50 (25 per group). Ksmax and ksmean are computationally much more intensive than the other summary statistics | 2023-10-31T13:04:56.596Z | 2023-10-30T00:00:00.000 | {
"year": 2023,
"sha1": "9d1a7c7108f5b9f59652045e2319c756decf1c4a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "90825044f6341605ada18594f269553e0d69eef9",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": []
} |
210191901 | pes2o/s2orc | v3-fos-license | Menopausal hormone therapy, blood thrombogenicity, and development of white matter hyperintensities in women of the Kronos Early Estrogen Prevention Study
Supplemental Digital Content is available in the text
White matter hyperintensities (WMH) on T2-weighted magnetic resonance imaging (MRI) are associated with ischemic small vessel disease 1,2 and may precede exhibition of mild cognitive impairment. [3][4][5] Conventional cardiovascular risk factors (ie, age, hypertension, smoking, hyperlipidemia) are proposed to be risk factors for development of WMH, especially in elderly persons. 5,6 Other factors not traditionally considered as cardiovascular risk factors, specifically thrombogenic microvesicles (MVs), have, however, been implicated in microvascular disease contributing to the formation of WMH. [7][8][9] Thrombogenic microvesicles are blood-borne, cell membrane-derived vesicles that carry surface markers of the cell of origin, as well as phospholipids and proteins, associated with coagulation and inflammation. 10 In recently menopausal women participating in the Kronos Early Estrogen Prevention Study (KEEPS) who were at low risk for cardio- and cerebrovascular disease as defined by a rigorous set of exclusion and inclusion criteria, 11 WMH increased over the course of the 4 years of the study. 12 An exploration of nontraditional risk factors that might affect cerebral blood flow and perhaps cerebral microvascular permeability suggested that the thrombogenicity and proinflammatory state of the blood, defined by activated circulating platelets and platelet-derived MVs at the time women enrolled in the study, that is, before randomization to treatment, was associated with development of WMH over the study course. 12 KEEPS participants were randomized to either oral conjugated equine estrogen (oCEE), transdermal 17β-estradiol (tE2), both with pulsed progesterone, or placebo pills and patch (PBO) over the 4 years of the study. 11 These formulations of menopausal hormone treatments (MHTs) were shown to affect platelet secretory products, reactivity, and aggregation. [13][14][15] Therefore, the aim of this analysis was to determine whether the type of MHT modified the association of WMH with thrombogenicity of the blood, defined by a set of markers of platelet function and reactivity, intravascular cells, cell-derived MVs, endothelial activation, and inflammation.
Participants
Women enrolled in an ancillary MRI study of the KEEPS (NCT000154180) at Mayo Clinic were eligible for this study. KEEPS was a double-blind, placebo-controlled study to determine the effects of two different hormonal treatments on progression of atherosclerosis, defined by increases in carotid intima-medial thickness, in recently menopausal women. 11 In brief, women were between 42 and 59 years old and within 6 months to 3 years past their last menses at the time of enrollment. Women were excluded if they had a coronary artery calcium score of more than 50 Agatston units, smoked more than 10 cigarettes per day, had a body mass index greater than 35 kg/m², had a history of cardiovascular disease, or had low-density lipoprotein (LDL) cholesterol higher than 190 mg/dL, triglycerides higher than 400 mg/dL, a diagnosis of diabetes, uncontrolled hypertension (systolic blood pressure >150 mm Hg and/or diastolic blood pressure >95 mm Hg), or current or recent (6 months) use of cholesterol-lowering medications (statins, fibrates, or >500 mg/day niacin). Women were randomized to: oCEE (Premarin, 0.45 mg/day); transdermal tE2 (Climera, 50 mg/day); or PBO pills and patch. Micronized progesterone was given orally (Prometrium; 200 mg/day) for 12 consecutive days each month to both active MHT groups. 11 The study was approved by the Mayo Clinic Institutional Review Board and all participants gave written informed consent.
Brain imaging
Women underwent MRI at baseline before randomization, and at 18, 36, and 48 months, for measurement of WMH volumes on fluid-attenuated inversion recovery sequences as previously described. 16,17
Blood collection and analysis
Women were asked to refrain from aspirin for 2 weeks before blood collection. Fasting venous blood was collected into a syringe through a 19 gauge butterfly needle, dispensed into plastic tubes containing the anticoagulants needed for each assay, and maintained at 33°C until processed within 30 minutes. 18 Platelet activation using this technique, as measured by surface expression of P-selectin and fibrinogen receptors, is less than 5%. 19 Collections occurred at baseline before randomization to study treatments and at each study time point. 18 Total cholesterol, LDL cholesterol, and high-density lipoprotein cholesterol, triglycerides, blood glucose, and 17β-estradiol were measured by Kronos Science Laboratories (Phoenix, AZ) and the Mayo Clinic Department of Laboratory Medicine and Pathology (Rochester, MN).
Platelet count was determined by Coulter counter, as previously described. 20 Expression of activated platelet membrane P-selectin and glycoprotein IIb/IIIa complex binding to PAC-1 antibody (an indirect measure of the expression of the membrane fibrinogen receptor) was measured by validated flow cytometry techniques. [18][19][20][21] Total numbers of thrombogenic (phosphatidylserine-positive, defined by annexin V binding) MVs and other MVs staining positive for selected cell-specific markers were measured using fluorophore-conjugated recombinant proteins and antibodies by flow cytometry. 18,21,22
Statistical analysis
Data reduction methods were used to achieve some degree of parsimony in these analyses so that the complexity could be reasonably supported by the sample size. Analysis of longitudinal WMH data was performed with a two-stage approach to first characterize the time-response profile and then assess differences in profiles across treatment. The goal of the first stage is to adequately summarize the repeat measurements into single WMH responses per person. For this, slope coefficients were computed using least squares regression in fitting each woman's time-response data with a linear equation. To minimize the impact of skewed values, we took the logarithm of WMH, with a constant of 1 added to the raw value, before modeling. One-sample Wilcoxon signed-rank tests for zero slopes were used to test for a significant change in WMH over 48 months separately in each group. The slopes were then assessed for treatment differences using the proportional odds ordinal logistic model, which provides a generalization of the Kruskal-Wallis test for pairwise testing. The results of the two-stage approach were compared with generalized least squares, in which all the serial log-transformed data were analyzed jointly in a single model and the correlation of the repeat measurements was taken into account.
For each of the 14 cellular activation markers, repeat measurements on the same person were averaged across visits, and the resulting mean values were converted to normal scores based on their ranks. Scored dimensions of platelet reactivity and MVs were derived using principal component (PC) analysis on the 14 transformed variables. The multiple PCs were carried forward and analyzed for association with three-level treatment in a multinomial logistic regression model (an extension of the binary logistic model for >2 unordered outcome categories), and for association with WMH response in a proportional odds ordinal logistic regression model while adjusting for randomized treatment group. P values are computed from the likelihood ratio χ² statistic for the model that is due to the individual or multiple variables of interest, with the exception of pairwise treatment comparisons, which are based on approximate Wald χ² statistics.
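The rank-based scoring and principal component step could look roughly like the sketch below. The data are synthetic, and because the exact rank-to-normal-score transform and the downstream multinomial and proportional odds models are not specified in the text, the mapping of ranks through the standard normal quantile function (and the use of scikit-learn's PCA) are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import rankdata, norm
from sklearn.decomposition import PCA

def normal_scores(x):
    """Convert a vector to normal scores (rank-based inverse-normal transform)."""
    ranks = rankdata(x)                      # average ranks for ties
    return norm.ppf(ranks / (len(x) + 1.0))  # map ranks to standard-normal quantiles

# markers: hypothetical (n_women x 14) matrix of visit-averaged activation markers
rng = np.random.default_rng(0)
markers = rng.lognormal(size=(95, 14))

scored = np.column_stack([normal_scores(markers[:, j]) for j in range(markers.shape[1])])

pca = PCA(n_components=5)                    # retain the first five components
pc_scores = pca.fit_transform(scored)
print("explained variance of 5 PCs:", round(pca.explained_variance_ratio_.sum(), 2))
```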
RESULTS
Baseline characteristics of the 95 participants, a subset of the 118 KEEPS participants at the Mayo Clinic site for whom WMH data were available, 12 did not differ across treatment group assignments except for smoking status (Table 1). WMH increased in all three groups over the 48 months of treatment (P < 0.001 each; Table 2). The extent to which WMH increased, as summarized by within-person slopes, differed across treatments (P = 0.044), with pairwise comparisons indicating greater increases in oCEE than in PBO (P = 0.011). Results from repeated measures modeling (model-predicted WMH at 48 months are shown by treatment group in Supplemental Table 1, http://links.lww.com/MENO/A510) were in reasonable agreement with the summary measure analysis, supporting the validity of the simpler two-stage approach,
and with differences between oCEE and PBO reported in the original analysis (Tables 1 and 3 of reference 17). The average of each of the 14 intravascular cellular activation markers (6 platelet reactivity variables and 8 MV variables) over treatment follow-up is shown by treatment group in Table 3. Because of the multiplicity of variables, and because of differences in directional change (increases or decreases compared to placebo) for each of the variables, we used PC analysis to reduce these dimensions to their most important components. The first five PCs were retained as they could explain most (62%) of the variability in the 14 standardized variables (Supplemental Table 2, http://links.lww.com/MENO/A511).
A global test of association with treatment for these five PCs approached significance (P = 0.059), with partial tests revealing an overall group difference for PC 3 (P = 0.006). PC 3 represents a contrast of platelet microaggregates, adenosine triphosphate secretion, basal expression of P-selectin, and fibrinogen receptor complex (PAC-1 binding) versus total platelet count and numbers of leukocyte-derived MV. The composite scores in oCEE were marginally to significantly higher compared to other groups (P = 0.003 for oCEE vs tE2, and P = 0.063 for oCEE vs PBO; Supplemental Table 3, http://links.lww.com/MENO/A512).
Using multivariable regression to test the joint influence of the PCs and treatment on the slope measure for WMH, the global contribution of all five PCs did not reach statistical significance (P = 0.104). However, of the individual components, PC 1, reflecting MV positive for expression of tissue factor, intracellular adhesion molecule 1 and vascular cell adhesion protein 1, and MVs derived from leukocytes and monocytes, showed the most prominent effect (P = 0.003). This finding indicates that, after controlling for treatment, the higher the composite score for PC 1 the greater the rate of increase in WMH. Also from this model, the association between treatment group and WMH increase persisted after adjustment for PC variables (P = 0.009). Based on the global test of interaction on 10 degrees of freedom, there was no evidence (P = 0.204) that the overall association between PCs and WMH increase differed by MHT (Table 4).
DISCUSSION
The results of this study support previous observations that blood thrombogenicity and proinflammatory status associate with WMH, 7-9,12 and extend those observations that this association may be influenced by factors other than the type and dose of menopausal hormones used for the treatment in the Kronos Early Estrogen Prevention Study.
Factors influencing generalized inflammation and, indirectly, blood thrombogenicity, include conventional cardiovascular risk factors such as age, blood pressure, hyperlipidemia, insulin resistance, and life-style choices such as diet, activity, and smoking. In KEEPS, the conventional cardiovascular risk factors such as body mass index, blood pressure, triglycerides, HDL and LDL, glucose, and smoking status, however, did not associate with WMH at 48 months, which is consistent with other studies. 9,12,23,24 Other potential sources for inflammation in KEEPS participants are unclear. Only 5% of participants were current smokers, 12 but other behavior factors such as diet and activity were not analyzed, nor were potential sources of commensal or low-grade infectious or inflammatory conditions such as periodontal disease, asthma, or prior histories of hypertensive pregnancy disorders. 25,26 Each of these conventional and nonconventional risk factors may individually be insufficient to initiate an inflammatory response of the cells within the vascular compartment. Their collective effects on the endothelium, platelets, and monocytes may, however, reach a threshold that alters the macrovasculature (carotid artery intima-media thickness) 18,27 and the cerebral microvasculature, affecting development of WMH.
In the present study, the overall association between treatment and the five PCs describing a number of cellular activity measures did not reach statistical significance at the P < 0.05 level. PC 3, which represented a contrast of platelet reactivity measures, however, differed significantly in the oCEE group compared to the tE2 or PBO groups. This result is consistent with previous findings of significant differences in platelet functions between tE2 and oCEE groups. 13,14,15 Effects of various genetic variants on the responses to treatment might also have masked potential treatment effects, as associations of genetic variants related to metabolism and uptake of estrogen, to innate immunity, and to APOE ε4 with MHT effects have been observed for differences in chronological age at onset of menopause, in carotid artery intima-medial thickness, and in deposition of β-amyloid in the brain. 9,28-32 In spite of these effects, after controlling for treatment, the overall association of the five PCs with increase in WMH reflected the strong positive correlation between PC 1 score and WMH increase. Taken together these results suggest that both MHT and the composite of the MV measurements explaining PC 1 show an independent effect on development of WMH.
There are several limitations of this study that should be considered. First, the results may not be applicable to the general population as the KEEPS enrolled recently menopausal women within a relatively narrow age range. In addition, these women were predominantly white, healthy, educated, and most were nonsmokers. The advantage of this homogenous population is that the findings may, however, reflect general physiological processes that are not confounded by manageable cardiovascular risk factors. Second, the influences of the MHT used in KEEPS on development of the WMH may not apply to other doses or formulations of MHT used in other studies. Third, the overall association between PCs and WMH increase did not apparently differ by MHT. However, the relatively small sample in our study may have limited the power to detect such a difference.
CONCLUSIONS
The findings of the present study are consistent with those of other investigations that implicate thrombogenicity of the blood and inflammation as contributors to development of WMH. 9,12,33 Activation of blood platelets, endothelium, and monocytes associated with development of WMH are most likely multifactorial including synergistic effects of conventional risk factors such as age, blood pressure, and components of metabolic syndrome. In addition, other potential sources of platelet and cellular activation such as effects of natural menopausal aging processes, adverse pregnancy histories, commensal infections, and comorbid inflammatory conditions and behaviors could have additive effects. Specific mechanisms by which these activated cells and MV affect cerebral microvascular function leading to formation of WMH remain to be determined. | 2020-01-15T14:08:20.423Z | 2020-01-13T00:00:00.000 | {
"year": 2020,
"sha1": "3c234048d05a28a25ef16ed54e864954dbb83167",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc7050795?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f86be78d465b49a85ecec3762ef917d7a4361da",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247322554 | pes2o/s2orc | v3-fos-license | SICOT PIONEER (Programme of Innovative Orthopaedic Networking Education and Research): Re-inventing global orthopaedic education, training and research
The outburst of SARS-CoV-2 infection, known as coronavirus disease 2019 (COVID-19), posed an unprecedented impact on the global health care system. It was declared a pandemic by the World Health Organization (WHO) on March 11, 2020 [1]. Several precautionary steps were undertaken to control the spread of the virus throughout the globe such as nationwide lockdowns, social distancing protocols, use of masks and personal protective equipment (PPE). These restrictions, especially lockdowns and social distancing, continue to impact the workflow patterns and pose several challenges for all the medical specialties, not barring the orthopaedic community, including orthopaedic surgeons and trainees [1–3].
Introduction
The outburst of SARS-CoV-2 infection, known as coronavirus disease 2019 (COVID- 19), posed an unprecedented impact on the global health care system. It was declared a pandemic by the World Health Organization (WHO) on March 11, 2020 [1]. Several precautionary steps were undertaken to control the spread of the virus throughout the globe such as nationwide lockdowns, social distancing protocols, use of masks and personal protective equipment (PPE). These restrictions, especially lockdowns and social distancing, continue to impact the workflow patterns and pose several challenges for all the medical specialties, not barring the orthopaedic community, including orthopaedic surgeons and trainees [1][2][3].
Impact of COVID-19 on orthopaedic departments, organizations and training
Since the evolution of the pandemic, a reduced number of clinical cases has been observed in orthopaedic outpatient departments (OPDs), along with cancellations of elective surgeries, resulting in fewer orthopaedic patient admissions to hospitals [1,4]. Moreover, the task forces employed to manage the pandemic in various countries called for a delay or cancellation of all non-essential or elective surgeries. These decisions were taken to conserve healthcare resources (e.g., PPE, gloves, intensive care unit (ICU) beds, ventilators), reduce the risk of contagion transmission, reduce hospital admissions and enhance the capacity to support patients with COVID-19 [4,5]. Furthermore, one of the early studies conducted after the COVID-19 restrictions revealed a 32% reduction in trauma cases due to people remaining indoors during this period [6].
These unprecedented changes have affected orthopaedic training and fellowships in a very complex and colossal manner. Orthopaedic training, fellowships, conferences and meetings have waned at the expense of management of the viral spread [7]. Most face-to-face lectures, academic activities, teaching and other didactic activities, clinical ward rounds and case presentations conducted for the trainees were cancelled. Furthermore, the majority of teaching activities (lectures, case presentations, bedside teaching) were transformed into virtual formats, thereby significantly altering the clinical and surgical learning experience [1]. With the diminished number of elective or non-emergency procedures and trauma cases along with increased virtual outpatient clinics, orthopaedic surgeons got to spend less time managing emergency cases [7]. Even the roles of the fellows, residents and trainees were changed to share the burden of the pandemic in multiple capacities [1,5]. Reallocation of the orthopaedic surgeons as well as trainees occurred at many hospitals to manage the overwhelming challenge of increased COVID-19 patient load [2,5,7]. This impacted the learning curve of the orthopaedic trainees in an unprecedented manner [1].
Moreover, the limited number of procedures executed was recommended to be performed by consultant orthopaedic surgeons without involving the trainees to reduce the operation times and the spread of infection. Consequently, the surgical exposure for trainees was radically diminished. The impact of these reductions had resulted in some residents and fellows not being able to meet the minimum requirements of surgical training necessary to advance as an independent surgeon or consultant. Prolonging training has had to be considered by some in order to hone their subspecialty skills [5,8,9]. Additionally, international travel restrictions added to the complexity of access to training and significantly impacted international fellowship programmes. These restrictions led to the uncertainty among many fellows who had to cancel their training and/or visiting fellowship programmes [9,10].
The nationwide lockdown measures and other restrictions compelled changes in the approach to patient care. Various sectors of healthcare shifted to virtual clinics and telemedicine to reduce social contact. Concurrently, educational delivery also evolved, employing virtual methodologies like online lectures, e-learning tools, webinars and 3D-simulation techniques. Most conventional classroom teaching shifted to online group video platforms (e.g. Zoom, Cisco Webex, Microsoft Teams) and pre-recorded lectures, which have now largely become the new normal. This aimed to maintain some of the interaction orthopaedic trainees had with their mentors while keeping the didactic activities ongoing in a time of crisis. However, many other surgical training modalities such as cadaveric hands-on training and clinical postings in subspecialties (like paediatrics and elderly care) were either switched to teleconferencing or completely cancelled [2,10]. Reduction in the volume of elective and non-essential operations, in addition to redeployment of surgical residents to non-orthopaedic departments, limited their clinical and surgical exposure, leaving them behind in the skills they most need.
A similar pattern was also observed in other specialities as well such as gastroenterology, neurosurgery and urology [2]. Furthermore, the pandemic restrictions also impacted the exit exams of residents and trainees in many countries due to difficulty with conducting exams or postponements [10].
Stress and anxiety due to COVID-19 and its impact
The protracted nature of COVID-19 restrictions and constantly changing protocols has had a substantial impact on orthopaedic residents globally. Inadequate surgical exposure, safety concerns, the uncertainty of the exit exams, the ambiguity of completing fellowships on time, the fear of getting infected or transmitting the virus to their loved ones and so forth have led to an upsurge of mental health issues such as development of discomfort, stress, anxiety and poor sleep quality among the residents and trainees [1,11,12]. However, such stressful situations have not been limited to the orthopaedic community. Most other educational fraternities, businesses or employments worldwide have had to grapple with increased mental health issues [11,13]. A study conducted among university students (medical and non-medical) revealed that anxiety levels had increased during the pandemic, with medical students having higher anxiety levels compared to non-medical students before the introduction of online teaching [11].
Increased stress amongst workers, induced by the pandemic across different industries, affected productivity and thus adversely affected economic growth. This has proven to be a motivator for many employers to embrace newer work approaches and promote a safer work culture. Changes in work practices aim to shield employees from the undesirable effects of COVID-19 at the workplace and help improve their performance. The flexibility provided to workers and the introduction of newer technologies to access and collate information have helped workers perform better without the stress of COVID-19 [12]. Thus, several organizations in industries such as banking, information technology, education and finance have accepted virtual working as the new normal and possibly the future work trend.
Several surveys were conducted in different countries to assess the online teaching methodologies being employed during the pandemic. The residents and trainees who participated in such surveys reported online-based teaching methods to be easier. However, many highlighted difficulties with online case presentations, in-patient clinical evaluation, pre-operative planning, post-operative care, and clinical education at the patient's bedside, and were thus unsatisfied with the online teaching methods compared to traditional learning methods [1,2,[14][15][16][17]. Furthermore, the surgical skills required by residents were lacking given the deprived exposure to surgeries or even cadaveric training. In a survey of 327 orthopaedic residents spread across 23 countries in Europe, 58.6% of trainees experienced difficulties in the execution of operations [2]. Hence, delivering didactic activities virtually may prove to be helpful to some extent but certainly does not meet all of the needs of a surgical resident or trainee. This gap necessitated the evolution of a new era of education and training which utilizes modern tools like e-learning modules, virtual reality (VR), surgical simulations and learning management systems (LMS).
Integrating educational theories for the training of future orthopaedic surgeons
Many educational theories have been proposed for improving the learning curve in surgical disciplines. The traditional model of learning was true apprenticeship; however, over a period with the reduction of hours spent in surgical residency, this has moved on to competency-based training. It has also been observed that learning of psychomotor skills (mental and motor skills) required by a surgeon could be facilitated by simulations. The simulations can provide a safe learning zone without the pressure of operating theatres as well as ease of repetition for practice and achieving competence. Therefore, the understanding and application of educational theories such as the Fitts and Posner theory and the Ericsson theory might be beneficial to improving the learning curve of surgeons and to develop a training model for them [18,19].
The Fitts and Posner theory depicts a three-stage model to achieve motor skills involving trainees' performance at each stage. The first stage is the cognitive stage, which includes intellectualizing the task by demonstration and explanation; however, one may not be able to perform the task without errors. This level of attainment can be learned by a surgical trainee with the help of books, journals or lectures. The second stage, the associative stage, includes a translation of the acquired knowledge into proper motor skills. This stage can be achieved with repeated practice and feedback on performing the skill. In the third stage, the autonomous stage, the trainee surgeon may be able to perform the task smoothly and independently without any errors and with least mental effort, almost as an expert. However, achieving this stage requires continuous feedback and direct observation of the learner's skill in an operating room (OR) [19]. Regular repetitive practice, feedback and reinforcement play a major role in developing a skill to the level of an expert. Ericsson's theory is based on the principles of deliberate practice and allows for retention of the skill as an expert. Thus, it becomes a mandate for a surgeon to keep on performing a particular skill, frequently, to master it [18][19][20].
Both theories suggest the initial levels of competency may be attained by theoretical knowledge but the transition to expertise requires deliberate practice. Hence, simulation-based training must be an integral part of medical training to minimize any harm to patients. It is of utmost importance that following training, immediate formative feedback is provided based on the comparison of the trainee's performance to an established standard. Such a teaching model can be employed to optimize the training of surgical skills of the trainees or residents [19]. Simulation-based training can be categorized as VR, physical or hybrid [21]. Integration of these models into the training curricula of young surgeons along with controlled supervision is essential in the pandemic situation.
In summary, digital teaching methodologies, such as online lectures and seminars, instructional videos for surgical procedures, simulation-based training models and virtual web conferences for discussions, have the potential to bring a radical change to the future educational modalities for surgeons. This pandemic, thus, has provided a novel and accelerated opportunity to bring such models into practice.
These training models may also help to minimize the stress and anxiety among residents secondary to the pandemic.
Virtual reality in surgical training
Virtual reality and simulation have been introduced in the world of surgery in recent years with application in a wide range of specialities. One of the significant applications of VR technology in the medical world is in education and training [22]. VR simulation has been applied in several specialities such as laparoscopy, cataract surgery, psychiatric therapy, pain management and traumatic brain injuries. It is also establishing a role in pre-operative planning, intra-operative triangulations and surgical training in various orthopaedic surgical procedures (e.g. arthroscopy, arthroplasty, reconstruction of fracture malunion) [23,24]. Currently, resident training involves arthroscopic simulators, fully immersive intra-operative simulators (trauma management and arthroplasty) and haptic simulators for bone drilling and reconstruction (fracture malunions) [23].
In recent years, orthopaedic surgery, including arthroscopy and arthroplasty, has advanced by leaps and bounds with more specialized techniques, specialized instruments and advanced navigation skills compared to routine arthroscopy. Given that the learning curve of these orthopaedic surgeries (knee and hip arthroscopy, arthroplasty, spinal surgery) with minimally invasive approaches is long and may be associated with complications, VR simulators can help to accelerate the learning curve. The use of VR technology for the training of residents in the early years can minimize the time required to develop complex surgical skills (including visuospatial, perceptual and psychomotor abilities) along with ensuring patient safety [25]. The VR technology can also provide an opportunity for the trainees to learn and master certain tasks involved in surgery by repetitive practice in an innocuous and benign environment. Furthermore, since all the movements of the trainees are recorded, formative feedback is possible that can enhance skill learning [26].
Moreover, a study conducted among 25 unsupervised medical students to assess whether their interest in orthopaedic surgery can be influenced by the use of an arthroscopic simulator revealed that using VR simulators made the students more interested in orthopaedics, surgery and arthroscopy. Furthermore, the students reported improvement in their surgery quality even without expert supervision and suggested VR simulation to be a mandate for their surgical training [27]. Several studies have reported beneficial results in surgical performance when using VR during the training of young surgeons or surgical residents [23]. A study on 24 young surgeons demonstrated a better success rate for pedicle screw placement when performed by young surgeons trained by an immersive VR simulator compared to surgeons exposed to the observation of a spinal model and a teaching video of spinal surgery [24]. In addition, another study also reported improved intra-operative reconstruction surgeries of calcaneal fractures using computer-assisted pre-operative planning and virtual surgical technology [28]. Furthermore, a review of 31 articles evaluating the validity and efficacy of VR simulators showed improvement in performance in the operating theatre as well as better skill acquisition due to repeated use of simulators [29].
However, the transferability of the trainees from VR-to-OR (from simulation-based training to improved clinical performance in operation theatres) is a long path to be navigated and needs a structured curriculum to be developed based on educational theories. The competence criteria for the required skill set in trainees can be set in the simulators. This competency can be achieved by following a step-wise method to acquire mastery over a skill [26]. At SICOT, we have progressively transitioned to innovate surgical training, incorporating VR technology while evolving the educational model during these difficult times.
SICOT PIONEER
SICOT PIONEER (Programme of Innovative Orthopaedic Networking, E-learning, Education & Research) was born out of the need to innovate. The COVID-19 pandemic pushed the orthopaedic community into a tight squeeze, leaving us with no option but to transform digitally and integrate technology into routine activities. SICOT's Education Academy adopted this digital transformation which paved the way for the birth of the PIONEER project in June 2020; a steppingstone towards the digital education journey in orthopaedics. The different virtual educational options launched under PIONEER include the SICOT Virtual Fellowship Program (SICOTVfellow), SICOT Virtual Education Program (SICOTVed), SICOT Virtual Examination Platform (SICOTVexam), SICOT Virtual Surgical Training Program (SICOTVtrain) and Surgical Techniques/Podcasts. It strives to facilitate knowledge exchange among the whole orthopaedic community worldwide, including the most experienced senior consultants to the trainees in their initial years of residency.
PIONEER has hosted numerous webinars and podcasts (Tete-a-tete sessions) that have been live-streamed, incorporating a live discussion function for questions and comments. A custom-designed learning management system archives all the educational video content for the members to access anytime at their ease. Furthermore, video recordings of these e-events are also available on-demand as a feature item on the SICOT website. Moving forward, a wider constellation of e-events including interactive surgical demonstrations, panel-based discussions and accessibility in multiple languages will be a feature. As part of VTrain, we have developed a structured module to conduct teaching modules, assessments and accreditation for trainees in specific areas of orthopaedic practice. This virtual training will culminate in a hands-on cadaveric training at the annual SICOT Orthopaedic World Congress. Feedback from the members or viewers of the educational content video is constantly sought for further improvement.
Under the aegis of SICOT PIONEER, we have hosted 36 webinars till November 2021 with 36,732 live viewers and 21,783 on-demand viewers. The high number of viewership suggests substantial change in the learning behaviour during the pandemic. To gain feedback for further improvement and its success, we conducted a survey involving 7,763 members spanning the whole community. The seniority of the viewers was also evaluated depicting maximum participation from consultants (46.4%) followed by trainees (28.1%) and lastly, senior consultants (25.5%). This data suggests virtual educational programs may not be only helpful for trainees but also for young consultants. Furthermore, the viewers were found to be scattered across the globe with maximum viewership achieved from Asia (40.4%), followed by Europe (26.9%), the Middle East (17.2%), Africa (8.3%), South America (5.6%) and North America (1.0%). The viewership based on the country is depicted in Fig. 1.
The survey was conducted to assess the acceptance among the viewers for the quality of the content and whether the educational content meets their expectations. It was found that 69.9% of the viewers rated the webinars as 'excellent' (Fig. 2). Intending to assess the faculty who are delivering the webinars, the survey rated the faculty as average, good and excellent. Most of the viewers (69.5%) found the faculty to be 'excellent' (Fig. 2). Furthermore, the format of the webinar and the tools used were also evaluated and 57.1% of the viewers rated this as 'excellent' (Fig. 2). This survey result will help us constantly evolve the format of webinars and other virtual programs to deliver the best possible content in a way that is most relevant and supportive of traditional systems of learning.
Finally, the primary aim of the virtual training programs was to bring a positive change into the routine practice of the orthopaedic community. These virtual educational activities aim to bridge the gaps in traditional methods of learning and help steepen the learning curve of complex surgical procedures. Furthermore, they aid in lowering the stress and anxiety among the trainees during these difficult times of the pandemic by facilitating their learning, even with reduced exposure to surgery. The impact of the virtual programs on routine practice was assessed by surveying the viewers with regard to any changes observed in their practice. Positive feedback was received from 51.7% of viewers, while 35.7% of viewers answered 'maybe' (Fig. 3). Thus, these virtual educational programs may have scope for improvement in their structure, delivery format or audience-engagement techniques in the future.
Futuristic plans of SICOT PIONEER include the development of a more structured and systematic e-learning ecosystem (learning management system) that can supplement the traditional methods of teaching and learning. The virtualization of these techniques may help to shorten the learning curve of several surgical procedures (e.g. knee/hip arthroscopy, ankle arthroscopy, ligament reconstruction). Thus, the progression from the cognitive stage to the autonomous stage (Fitts and Posner theory) for a trainee may be accelerated.
Discussion
The COVID-19 pandemic has overturned education, medicine and training, especially affecting the training of surgeons to a great extent [30]. Restrictions such as social distancing, national lockdowns and international travel bans that have been put into practice to lessen the spread of the virus have greatly impacted the elective orthopaedic surgery being conducted globally [1][2][3]. This has resulted in a drastic reduction in the training activities of young surgeons and residents. Though these tough times are taking a toll on all of us, we at SICOT chose to innovate and continue to deliver the best quality education to our members. The education and training of budding orthopaedic surgeons cannot be overlooked for a better future. This was the impetus that nurtured the conception of an innovative educational platform, PIONEER, to continue with the educational and teaching activities amidst challenging pandemic restrictions. The virtual and e-learning methods employed at present have several advantages over traditional methods of learning. It is more cost-effective as learning can occur with minimum infrastructure and equipment. Furthermore, no travelling is required as it can occur in the comfort of a home. Interactive features such as chat or messaging during the live sessions can actively initiate doubt-clearing sessions for the learners. Pre-recorded demonstration videos of the surgeries can help the residents and junior doctors to learn surgery at their own pace. It also reduces the learning curve for complex surgical procedures. This model allows learners to learn from experts in their field rather than just the faculty in their respective institutions [30]. Hence, self-learning can also be improved. However, this could be applicable only for theoretical purposes and demonstration of videos on surgery. Integrating VR technology along with the e-learning modules can take the learning to the next level, adding to the many advantages of this educational model.
Fig. 2 Rating of the webinar aspects: average, good, excellent.
Simulation-based learning models provide an opportunity to learn the steps involved in a surgery in a risk-free environment, thus ensuring patient safety. Moreover, the trainees can rehearse the surgical steps repeatedly at any time without any ethical concerns. The human feedback (visual and physical) provided to the residents on the surgical steps using VR simulators can accelerate the process of learning along with patient safety. The competence achieved by the trainee in surgical steps can also be assessed with pre-defined criteria in a reproducible and dependable manner. Furthermore, emerging technology will also be able to validate reconstructive/fracture constructs as well as affirm the VR-to-OR experience for residents [23].
However, this technology is still nascent and in its infancy. Some of the simulators provide low fidelity, while high-fidelity simulators are expensive and limited at present. Moreover, there is limited evidence for the success of VR in the resident training curriculum and more detailed studies are still awaited. Furthermore, VR simulators are available for a limited number of procedures and cannot be generalized across the different surgeries in orthopaedics. In addition, the coordination/interaction required among the intra-operative team in an OR is a crucial skill to be mastered which is lacking in the presently available VR simulators [23]. A lot of challenges still need to be dealt with before implementing this technology widely. One such challenge involves mimicking a real patient situation in a virtual patient, since precise replication of complicated human anatomy and physiology (bleeding vessels, leaking structures, etc.) is difficult. Moreover, haptic feedback, an integral component of surgery, is very limited in VR simulators. Some of the newer simulators may be able to mimic and provide haptic feedback but they are expensive. Thus, these prominent limitations of VR technology need to be overcome in order to make it successful [23,26].
Overall, the future of integrating e-learning and VR technology holds high hopes for the training of orthopaedic residents even after the COVID era. One of the studies evaluated the effectiveness of a five-week structured virtual course teaching orthopaedic trauma via weekly online lectures and virtual interactive sessions to third-year medical students. The average rating of the educational course was reported to be 4.98 out of 5. It was proven to be effective in providing the knowledge and preparation for basic skills required for fresh orthopaedic interns, and it can be considered a reasonable alternative to traditional clinical orientation [31]. In conclusion, the virtual learning model could be employed as a useful tool for the supplemental training of acquiring basic skills in OR settings. SICOT PIONEER is budding in this space and developing many virtual spokes encompassing not only conventional teaching and training modules but also surgical techniques, examinations and virtual interactive forums for discussion and learning.
Fig. 3 Feedback from 7,763 webinar viewers concerning change observed in practice due to these webinars or virtual educational activities. | 2022-03-10T14:42:12.008Z | 2022-03-10T00:00:00.000 | {
"year": 2022,
"sha1": "623f748a202613c26d0b62465150dbad41b391d6",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00264-022-05354-9.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "623f748a202613c26d0b62465150dbad41b391d6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267776209 | pes2o/s2orc | v3-fos-license | Meteorological and hydrological drought analysis of Sinop, Kastamonu, Bartın provinces in the Western Black Sea
In recent years, a significant change has been observed in the climate characteristics of Turkey. These changes, which are compatible with the general trend of global climate change, are also felt in the Western Black Sea Basin. This paper reports on the calculation of drought indices, with emphasis on two recently developed indices, the Reconnaissance Drought Index (RDI) and the Stream flow Drought Index (SDI), using a specialized software package named DrinC (Drought Indices Calculator). Additionally, DrinC includes a module for the estimation of potential evapotranspiration (PET) through temperature-based methods (Hargreaves, Blaney-Criddle, and Thornthwaite) that can be used for the calculation of RDI. Therefore, in this study carried out in the Western Black Sea region, meteorological drought analyses for 1-, 3-, 6- and 12-month periods were made for 3 precipitation observation stations in the provinces of Sinop, Kastamonu and Bartın, and monthly total precipitation and monthly average temperature data were used to determine the meteorological drought.
Introduction
Drought is a phenomenon whose definition depends on the area under investigation and on the specific application being addressed. Because an affected region faces environmental, economic, and social challenges, many definitions of this hazardous phenomenon have been developed [1]. During a drought, a lack of moisture usually results in a severe hydrological imbalance, and affected areas experience dry weather and long-term water scarcity. According to Hagman (1984), drought is the most common natural disaster [2]. It is also among the most complex of all natural disasters affecting man, and it is generally described as an event observed over a specific period and under specific circumstances. Every year, various regions of the world are affected by drought [3,4]. Considering the severity, duration, and effects of drought, certain drought types are distinguished: meteorological drought, agricultural drought, hydrological drought, and socioeconomic drought [5]. Meteorological drought is defined by the severity and duration of the precipitation deficit. Because it depends directly on rainfall data, it is the first type of drought encountered, and the rainfall of a given period is compared with the average level for that period. Since the climate regime of a region is an important factor, meteorological droughts vary between locations. Taking two regions with different precipitation amounts as an example, suppose the long-term average annual rainfall in the first region is 500 mm/year, whereas in the second region it is around 1500 mm/year. If the rainfall in both regions in the same year is 750 mm/year, the first region is experiencing a humid year while the second region is experiencing a dry year. The main reason for this is that atmospheric conditions cause a lack of precipitation relative to the climatic regime. Meteorological drought is determined from recorded monthly rainfall data and is assessed on seasonal, water-year, and annual time scales [6]. As a result, researchers have observed the significant socioeconomic impact of such frequent changes.
Agricultural drought develops when the lack of rainfall associated with meteorological drought depletes soil water. The water demand of a plant is determined by its biological properties and growth stage, as well as by the physical and biological properties of the soil [7].
A lack of water in the hydrological system is referred to as hydrological drought. It is a type of drought in which water levels in rivers, lakes, reservoirs, and groundwater are unusually low [8].
The hydrological drought indicated that the total flow of the dry year was lower than the previous year's average flow.Furthermore, the frequency and severity of hydrological drought are typically defined at the river basin scale.Hydrological drought is considered to be ongoing if the actual flow in a river during a specified time period falls below a certain threshold.As a result, the effects of hydrological drought upstream of a river basin may reduce downstream flow and vice versa [9].
For example, after years of severe drought in a river basin, many years of normal rainfall may be required to replenish the reservoirs. Socioeconomic drought occurs when meteorological, hydrological, and agricultural drought factors affect the supply of and demand for certain economic goods or services. The supply of water, food, and hydroelectric energy, for example, is affected by weather conditions. In most cases, demand for these goods increases due to rising per capita consumption. As a result, socioeconomic droughts typically arise when demand increases while climate-dependent supply decreases [10].
Gümüş (2017) used the Streamflow Drought Index approach to conduct drought studies in the Asi Basin.
Data from four flow monitoring sites between 1954 and 2005 were analyzed in the research. The flow drought index calculations for the 3-, 6-, and 12-month time series indicated the dry, humid, and wet periods in the basin. According to the results, there were many more dry years between 1980 and 2005 than in prior years; in addition, 2000 and 2001 were shown to be very dry years [11].
Gümüş et al. (2019) conducted a study in the Ceyhan Basin of Turkey to investigate meteorological and hydrological drought using the Standardized Precipitation Index (SPI) and the Streamflow Drought Index (SDI). The two methods gave similar results in the region, and the authors stated that they help to understand the drought better [12].
Bakanoğulları (2020) used the SPEI (Standardized Precipitation Evapotranspiration Index) and SPI indices to analyze droughts in the Istanbul-Damlıca Stream Basin. Drought frequency and severity were determined with the SPEI and SPI, and the Thornthwaite method was used to estimate the evapotranspiration required for the SPEI, using meteorological data from the basin between 1982 and 2006. The coefficient of determination (R²) between the annual SPEI and SPI indices (0.977) was statistically significant, although drought patterns differed across the one-, three-, and six-month time frames. The study findings show that the SPEI drought index is more accurate with regard to agricultural productivity and that its use is therefore preferable [13].
In research by Coşkun (2020), a long-term precipitation trend analysis was performed in the Lake Van basin. Long-term recorded precipitation data from the Van-Bölge, Muradiye, Erçiş, Gevaş, Özalp, Tatvan, and Ahlat meteorological stations were used to evaluate both annual and seasonal patterns in precipitation, and the Mann-Kendall, Spearman's Rho, and Şen tests were applied to the data. Annual precipitation has declined at the Gevaş and Ahlat stations, whereas the Van-Bölge station showed an increase in annual precipitation, although the rise was negligible. Erçiş and Ahlat stations have had a considerable fall in precipitation, whereas the Van region has seen a slight rise [14].
Material and Method
This section presents the methodology used to collect and analyze the data of the study.
The scope of the study is to determine drought sensitivity and to identify the driest years in the western Black Sea region with the help of data from 8 precipitation and 4 flow observation stations. Drought analysis was used in order to track changes in drought index values through time, and the Standardized Precipitation Index (SPI), the Reconnaissance Drought Index (RDI) and the Stream flow Drought Index (SDI) were used for this aim. The Sinop, Kastamonu and Bartın provinces in the Western Black Sea region were selected for the research because precipitation there is at times scarce. Meteorological and hydrological drought analyses were performed with data for the periods 1969-2019 and 1965-2015, respectively. Missing precipitation, temperature and flow data were completed with regression analysis.
Methodology
A meteorological and hydrological drought analysis was conducted for the data of 8 precipitation and 4 flow observation stations in the study area. Missing data were completed with regression analysis, as sketched below. The Standardized Precipitation Index (SPI) method is used to determine meteorological drought from monthly total rainfall data. The Reconnaissance Drought Index (RDI) method is used to determine meteorological drought from average monthly temperature data and total monthly precipitation data. The Stream flow Drought Index (SDI) method is used to determine hydrological drought from monthly mean flow data. Drought analyses were performed with the DrinC software at the study areas for 1-, 3-, 6-, and 12-month periods.
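The text above notes that gaps in the station records were completed by regression. The minimal sketch below, with hypothetical arrays and a single neighbouring reference station, illustrates one common way such gap filling can be done; the actual regression setup used in the study is not specified, so this is an assumption for illustration.

```python
import numpy as np

def fill_missing_by_regression(target, reference):
    """Fill gaps in a monthly series using a least-squares line fitted to a
    nearby reference station over the months where both records exist."""
    target = np.asarray(target, dtype=float)
    reference = np.asarray(reference, dtype=float)
    both = ~np.isnan(target) & ~np.isnan(reference)
    slope, intercept = np.polyfit(reference[both], target[both], 1)
    filled = target.copy()
    gaps = np.isnan(target) & ~np.isnan(reference)
    filled[gaps] = slope * reference[gaps] + intercept
    return filled

# Hypothetical example: a short precipitation record with one missing month
station = [55.0, 60.0, np.nan, 40.0, 75.0]
neighbour = [50.0, 58.0, 47.0, 42.0, 70.0]
print(fill_missing_by_regression(station, neighbour))
```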
Data
Monthly total precipitation and mean temperature data for the 8 stations in the Sinop, Kastamonu and Bartın provinces were obtained from the General Directorate of Meteorology, and monthly mean flow data for the 4 stations in the study area were obtained from the General Directorate of State Hydraulic Works. The average annual temperatures, the annual average total precipitation, and the average flow data obtained from these stations are shown in Table 1.
Table 1. Precipitation and flow monitoring stations and their geographic locations.
Standardized Precipitation Index
The Standardized Precipitation Index (SPI) was developed by McKee et al. (1993) to determine the effects of the reduction in precipitation on groundwater, reservoir storage, soil moisture, snow drifts, and streams. It is obtained by dividing the difference between precipitation and its mean, within the specified time period, by the standard deviation, so that the observed precipitation is converted to a standardized normal variable. In fact, SPI provides a standardized conversion of the observed precipitation probability and can be calculated for desired time periods such as 1, 3, 6, 9, 12, 24, and 48 months. The formula and classification of the method are given below [15]:

SPI = (P_i − P̄) / σ   (1)

where P_i is the precipitation total for period i, P̄ is the long-term mean precipitation and σ is the standard deviation of the precipitation series.
Reconnaissance Drought Index
The Reconnaissance Drought Index (RDI) is developed to approach the water deficit in a more accurate way, as a sort of balance between input and output in a water system [16]. The initial value (α_k) of RDI is calculated for the i-th year on a time basis of k (months) as follows:

α_k^(i) = ( Σ_{j=1}^{k} P_ij ) / ( Σ_{j=1}^{k} PET_ij )   (2)

where P_ij and PET_ij are the precipitation and potential evapotranspiration of the j-th month of the i-th year and N is the total number of years of the available data. The values of α_k satisfactorily follow both the lognormal and the gamma distributions in a wide range of locations and different time scales, in which they were tested. The standardized RDI is then computed as:

RDI_st^(i) = (y^(i) − ȳ) / σ̂_y   (3)

where y^(i) is ln(α_k^(i)), ȳ is its arithmetic mean and σ̂_y is the standard deviation of y, respectively. The process used the values of the total monthly precipitation, the average monthly temperature and the mean monthly discharge for a total of 12 monitoring stations in the Sinop, Kastamonu and Bartın provinces.
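A minimal sketch of how the standardized indices of eqs. (1)-(3) could be computed from monthly station data is shown below. Array shapes and variable names are illustrative, the simple z-score form of the SPI follows eq. (1) as written above (the full SPI usually fits a gamma distribution first), and the DrinC software itself is not reproduced here.

```python
import numpy as np

def standardize(x):
    """Generic z-score standardization used by the simplified SPI of eq. (1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def rdi_st(precip, pet, k=12):
    """Standardized RDI (eqs. 2-3) for a k-month reference period.

    precip, pet: arrays of shape (n_years, 12) holding monthly totals;
    alpha_k = sum(P) / sum(PET) over the first k months of each year,
    y = ln(alpha_k), RDI_st = (y - mean(y)) / std(y).
    """
    precip, pet = np.asarray(precip, float), np.asarray(pet, float)
    alpha_k = precip[:, :k].sum(axis=1) / pet[:, :k].sum(axis=1)
    return standardize(np.log(alpha_k))

# Hypothetical example with 5 years of synthetic monthly data
rng = np.random.default_rng(1)
P = rng.gamma(shape=2.0, scale=40.0, size=(5, 12))    # monthly precipitation, mm
PET = rng.gamma(shape=3.0, scale=30.0, size=(5, 12))  # monthly PET, mm
print("SPI (annual totals):", np.round(standardize(P.sum(axis=1)), 2))
print("RDI_st (12 months): ", np.round(rdi_st(P, PET), 2))
```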
Bartin Precipitation Monitoring station (17020) meteorological drought analysis (SPI)
SPI values were examined for 1-, 3-, 6-, and 12-month periods using the monthly total precipitation data measured continuously between 1965 and 2019 at the Bartın station. Figure 1 and Figure 2 show the dry and moist period rates for the monthly and for the 3-, 6-, and 12-month SPI values, respectively. Figure 1 shows that the monthly dryness ranged between 45% and 59% according to the SPI values. The highest dry rate, 59%, occurred in November and February, and the lowest, 45%, in March. The wettest periods, with a moist rate of 41%, were in November and February. The periods of drought and moisture for the 3-, 6-, and 12-month SPI values are shown in Figure 2. The driest period with the highest SPI-3 value was SPI3-3 in October with 59%, and the lowest dry period was SPI3-3 in January with 48%. For the periods SPI3 in January and SPI6 in April, the dry rates were 52% and 55%, respectively. According to the 12-month SPI-12 values, dryness was 50% and moisture was 50%.
The driest periods with the highest SPI-3 values were SPI3-2 in October with 58%, and the lowest dry period was SPI3-3 in April with 50%. For the two six-month SPI periods (SPI-6), the dry periods were SPI6-1 in October and SPI6-2 in April with 52%. According to the 12-month SPI-12 values, dryness was 56% and moisture was 44%. The driest periods with the highest SPI-3 values were SPI3-1 in October with 59%, and the lowest dry period was SPI3-3 in April with 50%. For the two six-month SPI periods (SPI-6), the dry periods were SPI6-1 in October with 54% and SPI6-2 in April with 50%. According to the 12-month SPI-12 values, dryness was 56% and moisture was 44%.
2.5. The Stream flow Drought Index
The Stream flow Drought Index (SDI) is used for hydrological drought analysis. According to Nalbantis and Tsakiris (2009), if a time series of monthly stream flow volumes Q_i,j is available, in which i denotes the hydrological year and j the month within that hydrological year (j = 1 for October and j = 12 for September), the cumulative volume V_i,k can be obtained based on the equation [17]:

V_i,k = Σ_{j=1}^{3k} Q_i,j   (4)

in which V_i,k is the cumulative stream flow volume for the i-th hydrological year and the k-th reference period, k = 1 for October-December, k = 2 for October-March, k = 3 for October-June, and k = 4 for October-September. Based on the cumulative stream flow volumes V_i,k, the Stream flow Drought Index (SDI) is defined for each reference period k of the i-th hydrological year as follows [18]:

SDI_i,k = (V_i,k − V̄_k) / s_k   (5)

where V̄_k and s_k are the mean and the standard deviation of the cumulative stream flow volumes for reference period k over the years of record.
Table 3. Definition of states of hydrological drought with the aid of SDI [34].
In this research, SPI, RDI and SDI values for 1, 3, 6, and 12 months were calculated and evaluated using the Standardized Precipitation, Reconnaissance and Stream flow Drought Index methods.
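The SDI of eqs. (4)-(5) lends itself to a very short implementation. The sketch below uses synthetic flows and only the annual (October-September) reference period; it is an illustration under the definitions above, not the DrinC implementation.

```python
import numpy as np

def sdi(monthly_flow):
    """Stream flow Drought Index for the annual reference period (k = 4).

    monthly_flow: array of shape (n_years, 12) in October-September order.
    Following eqs. (4)-(5): V_i is the cumulative volume over the year,
    SDI_i = (V_i - mean(V)) / std(V).
    """
    q = np.asarray(monthly_flow, dtype=float)
    v = q.sum(axis=1)
    return (v - v.mean()) / v.std(ddof=1)

# Hypothetical example: 6 hydrological years of synthetic monthly flows
rng = np.random.default_rng(2)
flows = rng.gamma(shape=2.5, scale=8.0, size=(6, 12))
print(np.round(sdi(flows), 2))
```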
Figure 1. Dry-moist period distributions according to the monthly SPI values for the Bartın station (No. 17020) from 1965 to 2019.
Figure 3. Dry-moist period distributions according to the monthly SPI values for the Kastamonu station (No. 17074) from 1965 to 2019.
Figure 5. Dry-moist period distributions according to the monthly SPI values for the Sinop station (No. 17026) from 1965 to 2019. | 2024-02-22T16:08:47.201Z | 2023-09-07T00:00:00.000 | {
"year": 2023,
"sha1": "f4898c8f97877f5cfd84b174db4bca7510886ab4",
"oa_license": "CCBY",
"oa_url": "https://jhas-bwu.com/index.php/bwjhas/article/download/118/92",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0c0b75d05d472dfaced39d951b7851f432f3eb9a",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": []
} |
85642808 | pes2o/s2orc | v3-fos-license | Biochar mineralization and priming effect on SOM decomposition in two European short rotation coppices
As studies on biochar stability in field conditions are very scarce, the carbon sequestration potential of biochar application to agricultural soils remains uncertain. This study assessed the stability of biochar in field conditions, the effect of plant roots on biochar stability and the effect of biochar on original soil organic matter (SOM) decomposition in two (Italy and United Kingdom) short rotation coppice systems (SRCs), using continuous soil respiration monitoring and periodic isotopic (δ13CO2) measurements. When root growth was excluded, only 7% and 3% of the biochar carbon added was decomposed after 245 and 164 days in Italy and United Kingdom sites respectively. In the presence of roots, this percentage was increased to 9% and 8%, suggesting a small positive priming effect of roots on biochar decomposition. A decreased decomposition rate of original SOM was observed at both sites after biochar incorporation, suggesting a protective effect of biochar on SOM. This study supports the carbon sequestration potential of biochar and highlights the role of root activity on biochar decomposition, questioning the applicability of laboratory incubation studies to assess biochar stability.
Introduction
Biochar is a carbon-rich material produced from pyrolysis or gasification of biomass under low oxygen conditions (Lehmann, 2007). Biochar application to agricultural soils has been proposed as a promising strategy for carbon (C) sequestration (Lehmann, 2007) and climate change mitigation. However, the potential of biochar to improve soil C-sink is still under debate, since its stability seems to depend on several factors, such as the starting feedstock, pyrolysis conditions, soil environment and vegetation cover of the site (Hilscher & Knicker, 2011). Several short-term incubation experiments (Hamer et al., 2004;Cheng et al., 2008a;Kuzyakov et al., 2009;Zimmerman, 2010) suggest centennial or millennial mean residence times for biochar stability. Other authors found a faster decomposition rate (Bird et al., 1999;Nguyen & Lehmann, 2009;Zimmermann et al., 2012) and a rapid transformation of biochar by abiotic and biotic oxidation (Hamer et al., 2004;Bruun et al., 2008;Hilscher et al., 2009;Hilscher & Knicker, 2011;Zimmermann et al., 2012). Contrasting results also exist on the effect of biochar on the stability of native soil organic matter (SOM) (Kuzyakov et al., 2000). Some studies reported a stimulation (Wardle et al., 2008;Cross & Sohi, 2011;Zimmerman et al., 2011) and others no effect or inhibition (Kuzyakov et al., 2009;Novak et al., 2010;Spokas, 2010;Singh & Cowie, 2014) of native SOM decomposition after biochar addition to soil. Climate, especially temperature, can strongly affect both biochar and SOM decomposition rate (Cheng et al., 2008a;Nguyen et al., 2010). Zimmerman et al. (2011) reported that the direction (positive or negative) of the priming effect and its magnitude depend on soil and biochar type, ranging from −52 to 89% 1 year after soil biochar application.
Most of the experiments on biochar stability are based on short-term lab incubations while field studies remain scarce (Jones et al., 2012;Gurwick et al., 2013). Therefore, little is known about the interactions between biochar and roots and the related effects on biochar stability (Ventura et al., 2013).
The overall aim of this study was to assess the C sequestration potential of biochar under field conditions. In particular, we aimed to assess: (i) the stability of biochar in field conditions; (ii) the effect of biochar application on SOM decomposition; and (iii) the effect of plant roots on biochar stability. To achieve these aims, two different field experiments were carried out in Italy and the United Kingdom in two short rotation coppice systems (SRCs). This choice was made because of the increasing importance of SRC as one of the most efficient agricultural systems to meet European greenhouse gas reduction targets (Don et al., 2012). Moreover, the use of biochar in agriculture for food production is still debated, because of the high content of polycyclic aromatic hydrocarbons (PAHs), which could have negative impacts on the soil biota and human health (Brown et al., 2006) if translocated to the edible part of the plant, and also because of possible negative effects of biochar on plant defence chemistry (Viger et al., 2014). Biochar application to nonfood bioenergy crop systems avoids this issue of toxicity and at the same time focuses on a land use change that may be of wide significance in the future.
Experimental sites
In Italy, the experimental field was set up in a poplar (Populus x Canadensis Mönch, Oudemberg genotype) SRC plantation with a 2-year rotation period, located in Prato Sesia (Novara) (45°39′32.2812″ N; 8°21′16.8339″ E). The plantation was established in the spring of 2010 with a density of 6600 trees ha⁻¹ in single rows with a 3 m distance between rows and 0.5 m between plants on the row. Coppicing was undertaken in the spring of 2012 before biochar application. The soil is sandy loam (12% clay, 34% silt, 54% sand) with a pH of 5.4. Climate is temperate with an average annual temperature of 12°C and an average annual precipitation of 1500 mm.
The UK experimental site consists of a SRC plantation of mixed willow genotypes (Salix spp.), located at Pulborough, West Sussex, UK (50°57′ N; 0°30′ W). The plantation was established in 2008 with a density of 15 000 trees ha⁻¹ in double rows with alternating distances of 0.75 and 1.4 m between the rows and 0.55 m in the row. The soil is silt loam (7% clay, 53% silt, 40% sand) with a pH of 5.5. Climate is temperate with an average annual temperature of 11°C and an average annual precipitation of 800 mm. Coppicing was performed in the spring of 2009.
Biochar application
The biochar used in the two experiments was produced from maize (Zea mays L.) silage feedstock pellets at 1200°C under atmospheric pressure with a residence time of 40 min in a gasification plant (A.G.T. - Advanced Gasification Technology s.r.l., Cremona, Italy). Table 1 reports the main physicochemical characteristics of the biochar used in the experiments. Biochar C, N and H contents were determined by a CHN elemental analyzer (Flash EA 2000, Thermo Fisher Scientific, Bremen, Germany). Nutrient content was determined with an inductively coupled plasma optical emission spectrometer (ICP-OES) after mineralization with an Ethos TC microwave labstation (Milestone, Bergamo, Italy). Biochar δ13C was determined with a continuous flow isotopic ratio mass spectrometer (CF-IRMS; Delta V Advantage, Thermo Fisher Scientific, Bremen, Germany).
A completely randomized design with two treatments (biochar (B) and control (C)) and four replicates (plots) per treatment was used at both sites. Plots (45 m² each) were designed to include three plant rows and nine plants per row. Biochar (30 t ha⁻¹) was incorporated into the first 15-cm soil layer by rotary hoeing on March 30th 2012 at the Italian site and on June 19th 2012 by hand at the UK site. To disturb the soil evenly in control and biochar-treated plots, hoeing or digging was carried out in both control and treatment plots.
Soil respiration measurements
Total and heterotrophic soil respiration was measured in biochar-treated (R tot B and R h B, respectively) and control plots (R tot and R h , respectively). The trenching method was used to measure R h and R h B (Hanson et al., 2000); the trenched subplots were established in February 2012. At each site, soil CO 2 efflux was measured in three of the four plots per treatment using a closed dynamic soil respiration system with 12 automated chambers (Delle Vedove et al., 2010), every 2 and 4 h at the Italian and UK site, respectively. The difference in sampling frequency was related to the different power supply available at each site (i.e. AC for Italy and batteries with solar panels for the United Kingdom). The system uses the rate of increase in CO 2 within the chamber to estimate the rate at which CO 2 diffuses into free air outside the chamber. To minimize the underestimation of the efflux due to the alteration of the diffusion gradient, we used nonlinear curve fitting. Specifically, after chamber lid closure, when steady gas mixing within the chamber was established (typically after 30-40 s), a nonlinear regression between CO 2 concentration and time was performed (Delle Vedove et al., 2010). CO 2 concentration data were plotted against time for each chamber and measurement. CO 2 concentration trends were checked for their curvature and final CO 2 value. If the curvature was not convex and/or the difference between initial and final CO 2 concentrations was below 3 ppm, the computed fluxes were discarded. When it was impossible to estimate the nonlinear regression parameters because of the linearity of the CO 2 increase with time, a linear model was used instead. If the coefficient of regression (R 2 ) was below 0.90, computed fluxes were discarded.
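For clarity, the acceptance rules for a single chamber closure can be summarized in a short Python sketch; the quadratic trend used here as the nonlinear model and all names are simplifying assumptions and do not reproduce the routine of Delle Vedove et al. (2010):

```python
import numpy as np

def chamber_flux(t, co2, r2_min=0.90, min_delta_ppm=3.0):
    """Initial CO2 accumulation rate (ppm s-1) for one closure, or None if QC fails.

    The nonlinear model is represented by a quadratic in time (an assumed
    stand-in); the flux is taken as the slope at lid closure. A curvature check
    of the fitted trend, as described above, would be added at the marked line.
    """
    t = np.asarray(t, float)
    co2 = np.asarray(co2, float)
    t = t - t[0]
    if co2[-1] - co2[0] < min_delta_ppm:      # rise between first and last value too small
        return None
    a2, a1, a0 = np.polyfit(t, co2, 2)        # nonlinear (quadratic) fit
    # ... curvature check of the fitted trend goes here ...
    fit = np.polyval([a2, a1, a0], t)
    slope0 = a1                               # slope at t = 0
    if abs(a2) * t[-1] < 1e-3 * abs(a1):      # essentially linear rise: use a linear model
        slope0, b0 = np.polyfit(t, co2, 1)
        fit = slope0 * t + b0
    ss_res = np.sum((co2 - fit) ** 2)
    ss_tot = np.sum((co2 - co2.mean()) ** 2)
    if 1.0 - ss_res / ss_tot < r2_min:        # discard poorly fitted records
        return None
    return slope0
```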
One chamber was installed in each plot to measure R tot and another chamber was installed on each root exclusion subplot, to measure R h . The chambers were installed in the central part of each plot, in the middle between two tree rows, and placed on stainless-steel collars (20 cm diameter, 8 cm height) inserted for 4 cm into soil. The exact volume was calculated for each chamber after insertion into soil, by measuring chamber height from soil surface.
Soil temperature (T) at 10 cm depth and soil water content (SWC) between 0 and 18 cm were recorded every 30 min using temperature probes, (107, Campbell Scientific, Logan UT, USA) and water content reflectometers (CS-616, Campbell Scientific, Logan UT, USA), respectively. The probes were installed close to soil respiration chambers. In the Italian site, soil temperature and SWC were measured in each plot and trenched subplot. In the UK site, soil temperature was measured in trenched and untrenched soil, while SWC was measured only in the untrenched soil. A meteorological station was available at each field site for measurements of air temperature and humidity (CS215 Probe, Campbell Scientific, Logan, UT, USA) and rainfall (52202 Tipping Bucket Rain gauge, R. M. Young Company, Traverse City, Michigan, USA). Soil respiration measures were averaged on a daily basis per plot, subplot (trenched and untrenched) and treatment (B and C).
Only daily averages based on at least six of the 12 measurements (Italy) and three of the six measurements (United Kingdom) valid soil respiration measurements per day and on at least three replicates per treatment (B and C) and subplot (trenched and untrenched) were considered. Distribution of daily flux coefficient of variation (CV) was computed by treatment and subplot. Soil respiration daily fluxes with CV higher than 90th percentile were discarded.
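The daily aggregation rules can likewise be written compactly; the array layout assumed below (one value per single measurement, NaN for discarded fluxes) is illustrative only:

```python
import numpy as np

def daily_mean_flux(single_fluxes, min_valid):
    """Daily mean and CV of soil respiration for one chamber group.

    single_fluxes : flux values of one day (NaN marks discarded measurements)
    min_valid     : minimum number of valid values (6 of 12 in Italy, 3 of 6 in the UK)
    """
    x = np.asarray(single_fluxes, float)
    x = x[np.isfinite(x)]
    if x.size < min_valid:
        return np.nan, np.nan
    return x.mean(), x.std(ddof=1) / x.mean()

def drop_high_cv_days(daily_means, daily_cvs):
    """Discard daily fluxes whose CV lies above the 90th percentile of all daily CVs."""
    means = np.asarray(daily_means, float).copy()
    cvs = np.asarray(daily_cvs, float)
    means[cvs > np.nanpercentile(cvs, 90)] = np.nan
    return means
```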
Missing or discarded data were gap-filled according to the model proposed by Qi & Xu (2001), in which R, the soil CO 2 efflux (total or heterotrophic), is expressed as a function of the soil temperature T (°C) and the soil water content SWC (%). At the UK site, SWC data were not available in the period from June 19, 2012 to August 14, 2012. For this short period, gap-filling of soil respiration data was done according to a single exponential model, R = a·exp(b·T). The models were parameterized using soil temperature and soil moisture data collected at the experimental sites, determining the parameters a, b and c for each site and soil respiration chamber by nonlinear regression analysis. Finally, cumulative soil respiration fluxes were calculated for each chamber over the whole experimental period. All CO 2 fluxes are expressed as g C m⁻².
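The nonlinear regression for gap-filling can be set up, for example, with scipy; since only the parameters a, b and c are named here, the functional form used in the sketch below (an exponential temperature response scaled by a power of SWC) is an assumption standing in for the Qi & Xu (2001) model:

```python
import numpy as np
from scipy.optimize import curve_fit

def resp_model(X, a, b, c):
    """Assumed form R = a * exp(b*T) * SWC**c (stand-in for the Qi & Xu model)."""
    T, SWC = X
    return a * np.exp(b * T) * SWC ** c

def resp_model_T_only(T, a, b):
    """Single exponential model used when SWC data are missing."""
    return a * np.exp(b * T)

def fit_gapfill(T, SWC, R):
    """Fit parameters a, b and c for one site and chamber by nonlinear regression."""
    popt, _ = curve_fit(resp_model, (T, SWC), R, p0=(1.0, 0.05, 1.0), maxfev=20000)
    return popt

# usage sketch with synthetic data
rng = np.random.default_rng(0)
T = np.linspace(5.0, 25.0, 200)
SWC = 20.0 + 5.0 * np.sin(np.linspace(0.0, 6.0, 200))
R = 0.5 * np.exp(0.07 * T) * SWC ** 0.8 + rng.normal(0.0, 0.1, 200)
a, b, c = fit_gapfill(T, SWC, R)
R_filled = resp_model((T, SWC), a, b, c)   # values substituted for missing days
```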
Isotopic measurements and Keeling plots
The isotopic signature (δ13C) of the respired CO 2 was periodically assessed using the Keeling plot method (Ngao et al., 2005;Joos et al., 2008). Manual sampling of respired CO 2 followed by isotopic ratio mass spectrometry (IRMS) and direct online sampling using a Picarro G2131-i δ13C High-precision Isotopic CO 2 Cavity Ring Down Spectrometer (CRDS) were used and compared at both sites.
Manual sampling was performed using a portable infrared gas analyzer (IRGA, EGM 4, PP-Systems, Amesbury, MA, USA) connected to a closed dynamic chamber (SRC 1, PP Systems) and to a set of eight three-way valves (Fig. 1). The valves allowed air to circulate through four glass vials (12 ml Exetainer gas vials, Labco Ltd., Lampeter, Ceredigion, UK) or alternatively bypass them. At the beginning of the sampling cycle, all four vials were connected to the circuit and the valves were opened to allow the air to circulate through all of them. The chamber was placed on the soil surface and the four vials were filled sequentially during the accumulation of the respired CO 2 in the system. Before collection, each vial was isolated from the circuit by closing the corresponding valves. CO 2 concentration and time since the beginning of the measurement cycle were also recorded. A minimum range of 300 ppm between the first and the last air sampling was kept to properly calculate the δ13C of soil-respired CO 2 using the Keeling plot method (Joos et al., 2008). For each sampling cycle, the chamber was kept on the soil for a time ranging from 10 to 20 min, depending on the soil CO 2 emission rate. Three sampling cycles were performed in three different positions around each automated soil respiration chamber (for a total of 36 Keeling plots per sampling day, nine per treatment). The collected vials were then analysed in the lab for the δ13CO2 value with a continuous flow isotopic ratio mass spectrometer (CF-IRMS; Delta V Advantage, Thermo Fisher Scientific, Bremen, Germany) coupled with a gas purification device (Gas-Bench II; Thermo Fisher Scientific, Bremen, Germany).
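The Keeling-plot evaluation itself reduces to a linear regression of δ13C against the inverse CO 2 concentration, whose intercept is the signature of the respired CO 2 . A minimal sketch, with illustrative numbers, is:

```python
import numpy as np

def keeling_intercept(co2_ppm, d13c_permil, min_range_ppm=300.0):
    """d13C of soil-respired CO2 from one sampling cycle (Keeling plot).

    Least-squares regression of measured d13C against 1/[CO2]; the intercept
    (1/[CO2] -> 0) is the isotopic signature of the respired CO2. Cycles with
    a CO2 range below `min_range_ppm` are rejected, as described above.
    """
    co2 = np.asarray(co2_ppm, float)
    d13c = np.asarray(d13c_permil, float)
    if co2.max() - co2.min() < min_range_ppm:
        return None
    slope, intercept = np.polyfit(1.0 / co2, d13c, 1)
    return intercept

# example: four vials collected while CO2 accumulates in the chamber
co2  = [450.0, 600.0, 720.0, 800.0]
d13c = [-9.5, -12.8, -14.4, -15.4]       # per mil, illustrative values
print(keeling_intercept(co2, d13c))      # approx. the source signature (about -22 per mil)
```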
The CRDS measurements were performed by connecting the analyzer to the automated soil respiration system to subsample the circulating air. Instantaneous CO 2 concentration and δ13CO2 were recorded every 1 s by the CRDS in the CO 2 concentration range between 500 and 1200 ppm. Measurement cycles, lasting 10-20 min each, were repeated in three different positions in the soil around each automated soil respiration chamber on each sampling day.
At the Italian site, eight manual and two CRDS sampling campaigns were performed at roughly monthly intervals between April and November 2012. At the UK site, two CRDS measurements (on August 14, 2012 and October 2, 2012) and one manual sampling (on March 7, 2012) were performed. Preliminary tests (Figure S1) in a δ13CO2 range between −28‰ and −19‰ showed a good correlation (R = 0.85, P < 0.05) between the two methods (IRMS manual sampling vs. on-line CRDS measurements). According to major axis regression analysis, the intercept and slope of the regression line were not significantly different from 0 and 1, respectively. Regardless, the difference we detected between IRMS manual sampling and online CRDS measurements could be related to pressure anomalies within the chamber during CRDS sampling. This effect could have led to biases in the δ13CO2 of soil respiration by affecting the ratio of 13CO2 to 12CO2 that diffused across the soil-atmosphere interface. However, we were not able to quantify such an error, as no theoretical analysis of this effect is presently available (Takahashi & Liang, 2007). Therefore, a linear equation derived from this regression was used to convert CRDS into IRMS-equivalent data, where δ13CO2,CRDS and δ13CO2,IRMS are the isotopic signatures of the CRDS- and IRMS-derived data, respectively.
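Major axis regression and the resulting conversion can be written compactly; the sketch below is generic and does not contain the coefficients actually obtained from the data:

```python
import numpy as np

def major_axis_fit(x, y):
    """Major axis (model II) regression slope and intercept of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = x.var(ddof=1), y.var(ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)) / (2.0 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

def crds_to_irms(d13c_crds, slope, intercept):
    """Convert on-line CRDS values into IRMS-equivalent values with the fitted line."""
    return intercept + slope * np.asarray(d13c_crds, float)
```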
Biochar decomposition and priming effect on SOM
The fraction of the CO 2 respiration derived from biochar decomposition (f B ) was calculated for both R tot B and R h B using a two-source mass balance approach according to Phillips & Gregg (2001), f B = (δ13CO2,B − δ13CO2,SOM)/(δ13C B − δ13CO2,SOM), where δ13CO2,B and δ13CO2,SOM are the isotopic signatures of the CO 2 emitted from B and C, respectively, and δ13C B is the isotopic signature of the biochar (δ13C = −13.8‰).
Assuming a linear variation in f B between two Keeling plot sampling dates, the daily biochar-derived CO 2 fluxes were obtained by multiplying f B by the daily soil CO 2 fluxes of R h B and R tot B. The cumulative biochar-derived CO 2 flux was then calculated over the whole experimental period for both trenched (R h-biochar-derived ) and untrenched subplots (R tot-biochar-derived ) by summing the single daily biochar-derived CO 2 fluxes. R h-biochar-derived and R tot-biochar-derived were then used to estimate the amount of C remaining in comparison with that originally present in the biochar matrix. The priming effect of root activity (P eff-root ) on biochar decomposition was calculated as the difference between R tot-biochar-derived and R h-biochar-derived. In biochar-treated plots, the cumulative flux derived from the decomposition of native soil organic matter (R h-SOM-derived ) was calculated by subtracting R h-biochar-derived from R h B. The priming effect of biochar (P eff-biochar ) on SOM decomposition was calculated, in the trenched plots only, as the difference between R h-SOM-derived and R h .
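Numerically, the partitioning and the cumulative sums reduce to a few array operations. The sketch below assumes daily flux series and Keeling-plot results as inputs; the expressions for the priming terms follow the verbal definitions given above:

```python
import numpy as np

D13C_BIOCHAR = -13.8  # per mil, isotopic signature of the biochar

def f_biochar(d13c_b_plot, d13c_som_plot):
    """Fraction of the CO2 efflux derived from biochar (two-source mixing,
    Phillips & Gregg, 2001): one source is biochar, the other native SOM."""
    return (d13c_b_plot - d13c_som_plot) / (D13C_BIOCHAR - d13c_som_plot)

def biochar_derived_cumflux(day, flux_b_plot, sampling_day, f_b_sampled):
    """Cumulative biochar-derived CO2 flux (g C m-2) over the experiment.

    day, flux_b_plot          : daily time axis and daily CO2 flux of the biochar plot
    sampling_day, f_b_sampled : Keeling-plot sampling days and the corresponding f_B
    f_B is interpolated linearly between sampling dates, as described above.
    """
    f_daily = np.interp(day, sampling_day, f_b_sampled)
    return float(np.sum(f_daily * flux_b_plot))

# priming terms as defined above (assumed reading of the corresponding equations)
def priming_of_roots(r_tot_biochar_derived, r_h_biochar_derived):
    return r_tot_biochar_derived - r_h_biochar_derived

def som_derived(r_h_b, r_h_biochar_derived):
    return r_h_b - r_h_biochar_derived            # R_h-SOM-derived

def priming_of_biochar(r_h_som_derived, r_h_control):
    return r_h_som_derived - r_h_control          # negative values = protection of SOM
```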
Statistical analysis
All statistical analyses, soil respiration elaborations and flux computations were performed in STATA 10.1 (StataCorp, College Station, TX, USA). Cumulative soil respiration fluxes measured on biochar-treated and control plots were compared using analysis of variance (ANOVA), considering biochar application, trenching treatment and their interaction. Similarly, the δ13C values of the respired CO 2 were compared by ANOVA for each single date. Homogeneity of variance was checked before analysis. Intercepts of the Keeling plots were calculated using least squares linear regression. Major axis regression was used to compare results from Keeling plots obtained with manual sampling and CRDS measurements. Soil temperature and water content data were compared by repeated measures ANOVA using SigmaPlot 12 (Systat Software, Inc., USA).
Fig. 1 Scheme of the sampling system used to collect CO 2 emitted from the soil to calculate the δ13C of soil-respired CO 2 by the Keeling plot method. Arrows indicate the movement of the air through the system.
Results
The model used to gap-fill the missing soil respiration data on the basis of soil temperature and water content showed a high predictive capacity (R 2 = 0.72 for the Italian site and 0.90 for the UK site, on average), which allowed us to recover most of the missing data. The model including only soil temperature showed a lower R 2 (0.42 on average); however, this model was used to gap-fill data only for a short period. Including both measured and gap-filled data, the dataset accounted for 94% and 92% of the expected daily data for the Italian and UK sites, respectively. Residual gaps were due to power failures, which did not allow us to record soil water content and soil temperature data. Biochar treatment significantly increased SWC at the two experimental sites (Figs 2c, d and 3b). At the Italian site, trenching affected SWC and soil temperature depending on the presence of biochar. In the summer period, when biochar was applied, SWC was significantly higher in the trenched than in control plots (Fig. 2d). In the same period, soil temperature was also affected by trenching in the presence of biochar, being slightly but significantly lower in trenched than in control plots (Fig. 2b). On the contrary, when biochar was not applied, SWC in the trenched plots was slightly but significantly lower than in untrenched soil (Fig. 2c). Daily total and heterotrophic respiration measured in control (R tot , R h ) and biochar-treated plots (R tot B and R h B) at both sites (Fig. 4) showed a typical annual variation with higher values in summer due to the higher soil temperature (Figs 2a, b and 3a). In Italy, the addition of biochar did not significantly affect the CO 2 flux in either trenched or untrenched subplots (P > 0.05; Table 2). At the UK site, R tot was significantly higher with biochar, while no effect of biochar was detected for R h (Table 2).
The isotopic signature of the CO 2 efflux emitted from biochar-treated plots was always significantly greater than that of the CO 2 emitted from control plots (Fig. 5), with the only exceptions of August 23, 2012 and May 8, 2012 (for R tot only) at the Italian site (Fig. 5c). Furthermore, no interaction was found between soil biochar application and the root exclusion treatment on the δ13CO2 efflux (Table S1). These two conditions were an essential prerequisite to apply the mass balance approach according to the two-source mixing model, in which one source is the biochar and the other the native SOM.
The percentage of soil respiration attributed to biochar (f B ) varied according to the site and sampling date (Fig. 6a, b). At the Italian site, it was between 7% and 36% with a clear seasonal trend, especially in trenched plots (Fig. 6a). At the UK site, f B varied between 12% and 32% with higher values in summer than in spring (Fig. 6b). At both sites, f B was higher for R tot than for R h on most of the sampling dates (Fig. 6a, b). Thus, R tot-biochar-derived was higher than R h-biochar-derived (Table 2) and the biochar decomposition curves were steeper in untrenched than in trenched plots at both experimental sites (Fig. 7a, b). At the end of the experimental period, R h-biochar-derived accounted for 7% and 3% of the carbon originally added by biochar application at the Italian and UK sites, respectively, while R tot-biochar-derived amounted to 9% and 8%, respectively (Fig. 7a, b).
Considering the different length of the experimental period at the two sites, in the trenched plots the daily degradation rate was higher at the Italian (0.0288 ± 0.0009% day⁻¹) than at the UK site (0.014 ± 0.0003% day⁻¹) (Table 2), while in the presence of plant roots (control plots) the degradation rate was similar at both sites (0.039 ± 0.001 and 0.036 ± 0.001% day⁻¹ at the UK and Italian site, respectively).
The P eff-root was +29 gC m⁻² and +82 gC m⁻² at the Italian and UK site, respectively (Table 2). R h-SOM-derived amounted to 465 gC m⁻² and 397 gC m⁻² at the Italian and UK site, respectively, and at both sites it was lower than R h . Therefore, the P eff-biochar was −54 gC m⁻² (10% of R h ) at the Italian site and −66 gC m⁻² (14% of R h ) at the UK site (Table 2).
Discussion
Generally, the application of trenching increases SWC, because of the absence of plant water uptake in the trenched plots (Kuzyakov & Larionova, 2005). This effect was observed in the Italian site, in particular in the summer period, only when biochar was applied (Fig. 2d). In the same period, soil T was decreased in the trenched subplots probably because of an enhanced evaporation from the trenched plots (Fig. 2b).
Considering that we found a positive relationship between soil respiration and SWC in both sites, a higher SWC in trenched and biochar-treated plots probably led to an overestimation of R h B, and consequently to an underestimation of the P eff-biochar [Eqn (5)]. Nevertheless, the difference in SWC was so small and limited to a short-time period that the underestimation of negative priming effects was likely negligible. An increase in soil respiration after biochar addition has been previously observed in both lab (Kolb et al., 2009;Kuzyakov et al., 2009;Zimmerman, 2010;Cross & Sohi, 2011;Hale et al., 2011;Rogovska et al., 2011;Zavalloni et al., 2011) and field experiments (Jones et al., 2012;Ventura et al., 2013). This increase in soil CO 2 efflux has been related to the degradation of the labile fraction of biochar, such as bio-oils and condensation products (Thies & Rillig, 2009), or to the stimulation of SOM decomposition (Zimmerman, 2010;Luo et al., 2011). In the present study, at the Italian site, the cumulative CO 2 fluxes were not affected by soil biochar application (Table 2), since the CO 2 emission due to biochar decomposition was offset by a reduction in SOM decomposition. This result is in accordance with laboratory incubation studies under controlled conditions using the same biochar (Naisse et al., 2014). An increase in R tot cumulative flux was observed at the UK site (Table 2).
With the exception of an initial phase characterized by a low decomposition rate at the Italian site, the dynamics of biochar degradation can be well described by a negative double exponential function (Fig. 7). This agrees with the conceptual model for the degradation of fresh biochar (Zimmerman, 2010), whereby the biochar would consist of two pools: an aliphatic portion that is more readily mineralized and an aromatic one that is oxidized more slowly. The initial phase with a low biochar decomposition rate was also observed by Bai et al. (2013) and Hamer et al. (2004) in two laboratory incubation experiments and was related to the time needed by microorganisms to colonize biochar before degrading it.
Fig. 3 Soil temperature (a) and water content (b) measured in biochar-treated and untreated plots at the UK site. Soil temperatures registered in the trenched subplots and in untrenched soil were pooled together because no differences were detected between the two treatments. Total rainfall in the area is reported in plot b.
Both sites showed higher decomposition rates than those previously reported by Kuzyakov et al. (2009). Naisse et al. (2014), in an incubation experiment, found a lower decomposition rate for the same biochar used in the present study. Several factors could have enhanced the biochar decomposition rate under field conditions in comparison with controlled lab conditions; among them, the inputs of fresh organic matter from plants (Keith et al., 2011;Luo et al., 2011) and the frequent abrupt variations of SWC have been suggested (Nguyen et al., 2010). In the absence of plant roots, a higher biochar degradation rate was observed at the Italian site in comparison to the UK site (Fig. 7b). This could be explained by the different climatic conditions at the two sites, in particular by the higher soil temperature recorded in Italy (Figs 2 and 3). Mean annual temperature was suggested as one of the most important drivers of the natural oxidation of charcoal in soil (Glaser & Amelung, 2003;Cheng et al., 2008b).
The higher contribution of biochar-derived respiration to the total CO 2 efflux (Fig. 6) and the higher decomposition rate in the presence of roots (Fig. 7; Table 2) suggest a priming effect of roots on biochar decomposition. Many authors have found that root activity can stimulate SOM degradation (Kuzyakov, 2002;Schweinsbergmickan et al., 2012;Pausch et al., 2013), although the underpinning mechanisms have not yet been completely clarified. It is likely that a combination of these mechanisms could have affected biochar decomposition as well. In fact, biochar decomposition has been shown to be higher after the addition of labile substrates such as glucose (Hamer et al., 2004;Nocentini et al., 2010) or fresh organic matter (Keith et al., 2011;Luo et al., 2011). This effect has been explained by the cometabolism concept, whereby the stimulation of microbial growth and enzyme production induced by the added substrates would increase biochar decomposition (Hamer et al., 2004). Plants can strongly influence the structure of soil microbial communities and differentiate the rhizosphere microbial community from that of the surrounding soil (Bulgarelli et al., 2013). Therefore, the root-induced priming effect on biochar decomposition could be due to a shift towards a more efficient biochar-decomposing microbial community. The different plant species and rhizodeposits could explain why in the United Kingdom the decomposition rate in the presence of roots was higher than in Italy, notwithstanding the lower temperature recorded at the UK site.
Fig. 4 Daily total (R tot ) and heterotrophic (R h ) soil respiration fluxes in control (a, b) and biochar (c, d) treatments for the Italian and UK sites, respectively. Biochar was applied on March 30th and June 19th in Italy and the UK, respectively.
The present study showed a small decrease in SOM decomposition after soil biochar addition (Table 2). This negative priming effect was also observed during a laboratory incubation with the same biochar (Naisse et al., 2014). Also Liang et al. (2010) found a decrease in mineralization of added organic matter in a biochar-rich Amazonian Anthrosol. Similarly, Cross & Sohi (2011) found an inhibition of SOM decomposition during biochar incubation experiments with two different soils. Zimmerman et al. (2011), studying biochars produced from different feedstocks at different temperatures, observed a SOM protection effect of high-temperature biochars. It is well known that biochar surfaces and pores can absorb SOM molecules and protect them from decomposers. The adsorption affinity of biochar surfaces for SOM has been shown to increase with increased charring temperature and to be higher in grass biochar in comparison with wood biochar (Kasozi et al., 2010). As we used a high-temperature biochar (1200°C), we can suppose a high surface affinity of our biochar for SOM and consequently a high protective potential against biotic and abiotic oxidation. However, this protective effect of gasification char is likely to be short-lived and to decrease after physical weathering during prolonged field exposure (Naisse et al., 2014). Biochars produced at high temperatures have a high microporosity, which has been suggested to play a role in the inhibition of SOM mineralization (Brewer et al., 2011, 2014). Micropores may in fact be less accessible to microorganisms and protect absorbed organic matter against microbial degradation (Ameloot et al., 2013). Without a robust evidence base of field data, neither the evaluation of the carbon mitigation potential of biochar technology nor its diffusion and social acceptance is justified.
In this framework, multi-site field experiments aimed at assessing biochar stability under field conditions are crucial.
In the present article, regardless of the experimental site, biochar showed low decomposition rates and a protection effect on original SOM, confirming the carbon mitigation potential of this technology. However, the mechanisms that are behind the protective effect of biochar on SOM decomposition deserve to be investigated more deeply. Our field study showed that the presence of plant roots has a crucial effect on biochar stability through their priming effect. Therefore, laboratory incubations may overestimate the C sequestration potential of biochar. Similarly, as the positive priming effect of roots on biochar degradation could reduce or compromise the C-sink potential of biochar technology in a long-term perspective, the interaction between root activity and biochar stability has to be studied in depth and in long-term field experiments. The study of the change in microbial community induced by plant roots could be the key to understand the mechanisms underlying the observed priming effects. | 2019-03-30T13:05:57.058Z | 2015-09-01T00:00:00.000 | {
"year": 2015,
"sha1": "243fc724b6ea31cdacfeb1e0c4c34e97cc049871",
"oa_license": "CCBY",
"oa_url": "https://www.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/gcbb.12219",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "8f9991747548422535f48762e8d353482c80fe92",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
1551150 | pes2o/s2orc | v3-fos-license | Human Pathogens in Body and Head Lice
Using polymerase chain reaction and sequencing, we investigated the prevalence of Rickettsia prowazekii, Bartonella quintana, and Borrelia recurrentis in 841 body lice collected from various countries. We detected R. prowazekii in body lice from Burundi in 1997 and in lice from Burundi and Rwanda in 2001; B. quintana infections of body lice were widespread. We did not detect B. recurrentis in any lice.
fever in refugees (7,10). From April 1997 to December 1998, after our reports, a new strategy was designed to control typhus and trench fever. Health workers treated any patient with fever >38.5°C with a single dose of doxycycline (200 mg), a drug highly effective in the treatment of typhus (7). The program proved extremely successful, and in a follow-up in 1998 (10) we did not detect R. prowazekii in body lice collected in refugee camps in the country (Table 1).
Since 1998, we have continued our efforts and have collected 841 body lice obtained by medical staff from our laboratory or local investigators in Burundi, Rwanda, France, Tunisia, Algeria, Russia, Peru, China, Thailand, Australia, Zimbabwe, and the Netherlands (Table 1). In Burundi, lice were collected during the outbreak of epidemic typhus and on three occasions (1998, 2000, and 2001) after the outbreak had been controlled. Lice found on any part of the body, except the head and pubis, were regarded as body lice. The lice were transported to France in sealed, preservative-free, plastic tubes at room temperature. Delays between collection and analysis ranged from 1 day to 6 months. As negative controls, we used specific pathogen-free laboratory-raised body lice (Pediculus humanus corporis strain Orlando). To prevent contamination problems, as positive controls we used DNA from R. rickettsii R (ATCC VR-891), Bartonella elizabethae F9251 (ATCC 49927), and Borrelia burgdorferi B31 (ATCC 35210), which would react with the primer pairs we used in our PCRs but give sequences distinct from the organisms under investigation. To prevent false-positive reactions from surface contaminants, each louse was immersed for 5 min in a solution of 70% ethanol-0.2% iodine before DNA extraction and then washed for 5 min in sterile distilled water. After each louse was crushed individually in a sterile Eppendorf tube with the tip of a sterile pipette, DNA was extracted by using the QIAamp Tissue Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. This kit was also used to extract DNA from the organisms cultivated in our laboratory under standard conditions to be used as positive controls. The effectiveness of the DNA extraction procedure and the absence of PCR inhibitors were determined by PCR with broad-range 18S rDNAderived primers (10).
To detect louse-transmitted pathogens, we used each of the genus-specific primer pairs described in Table 2 in a separate assay. A total of 2.5 µL of the extracted DNA was used for DNA amplification as previously described (10). PCRs were carried out in a Peltier Thermal Cycler PTC-200 (MJ Research, Inc., Watertown, MA). PCR products were resolved by electrophoresis in 1% agarose gels. All lice yielded positive PCR products when amplified with the 18S rRNA-derived primers, demonstrating the absence of PCR inhibitors. Negative controls always failed to yield detectable PCR products, whereas positive controls always gave the expected PCR products. PCR amplicons were purified by using the QIAquick Spin PCR purification kit (Qiagen) and sequenced using the dRhodamine Terminator cycle-sequencing ready reaction kit (PE Applied Biosystems, Les Ulis, France), according to the manufacturer's recommendations. Sequences obtained were compared with those in the GenBank DNA database by using the program BLAST (14).
The sequences of the DNA amplicons we obtained were identical to those of R. prowazekii and B. quintana in GenBank. We detected R. prowazekii in body lice collected in Burundi in 2001 but not in those collected in 1998 and 2000, although they were positive for B. quintana. R. prowazekii was also detected in 7% of lice collected in Rwanda. We found B. quintana in body lice collected in France, the Netherlands, Russia, Burundi, Rwanda, Zimbabwe, and Peru. No PCR products were obtained for any of the lice when primer pair Bf1-Br1 was used, indicating lack of infections with Borrelia recurrentis.
Our PCR may greatly facilitate the study of lice and louse-borne diseases as it can be used to survey lice for these organisms, detect infected patients, estimate the risk for outbreaks, follow the progress of epidemics, and justify the implementation of controls to prevent the spread of infections. We have successfully applied the PCR assay to lice from homeless and economically deprived persons in inner cities of developed countries and found high prevalences of Bartonella quintana infections (3,5,6). Furthermore, we have emphasized the risk of R. prowazekii outbreaks in Europe, based on our findings of an outbreak of epidemic typhus in Russia, a case of Brill-Zinsser disease in France (15), and a case of epidemic typhus imported from Algeria (9). The PCR assay on lice may help detect outbreaks. In recent epidemics of louse-borne infections, the prevalence of body louse infestations in persons has reached 90% to 100% before clinical signs of louse-borne disease were noted in the population (16). Experience has shown that the emergence and dissemination of body lice can be very rapid when conditions are favorable (17). In Central Africa, large outbreaks of lice infestations occurred during civil wars in Burundi, Rwanda, and Zaire (16) and preceded the outbreak of epidemic typhus by 2 years (7). We clearly demonstrate the potential for further outbreaks of louse-borne diseases in Africa. Although lice from Burundi were negative for R. prowazekii in 1998 and 2000 as a result of the administration of doxycycline to patients, the persistence of the vector enabled the spread of R. prowazekii from human carriers back into the louse population. In 2001, we found that 21% of lice from refugee camps in the same areas of Burundi as sampled earlier were positive by PCR for R. prowazekii. Further samples submitted to our laboratory indicate a typhus outbreak is currently developing in refugee camps in Burundi (unpub. data). We also found R. prowazekii in 7% of body lice collected in 2001 from a jail in Rwanda. That the country is now host to 300,000 refugees from the January 2002 eruption of the Nyiragongo volcano is thus a concern.
Although lice from the other areas studied were free from typhus, we found B. quintana to be widely distributed; it was detectable in lice from France, the Netherlands, Burundi, Zimbabwe, and Rwanda. We could not find the organism in lice from Australia, Tunisia, and Algeria, but only small numbers of lice from these areas were studied. As with R. prowazekii, chronic bacteremia occurs with B. quintana infection in humans; the only way to eradicate the organism is to eliminate body lice. We were not able to detect Borrelia recurrentis in any of the lice, which indicates that infection rates with this organism are very low or the agent is restricted to specific geographic zones.
Our study has demonstrated the usefulness of PCR of body lice in ongoing surveillance of louse-associated infections. When faced with outbreaks of body lice or to follow-up outbreaks of louse-borne infections, investigators should consider using PCR for R. prowazekii, Bartonella quintana, and Borrelia recurrentis in body lice collected from the study area and shipped to their laboratories. Our results from Burundi highlight the necessity for using combinations of methods to control body lice and hence R. prowazekii infections.
Dr. Fournier is a physician in the French reference center for the diagnosis and study of rickettsial diseases. His research interests include the physiopathologic, epidemiologic, and clinical features of rickettsioses. | 2014-10-01T00:00:00.000Z | 2002-12-01T00:00:00.000 | {
"year": 2002,
"sha1": "da082faa2a7526d6ddc31825e96a004fd5b92bd4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3201/eid0812.020111",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0d30e82e80f5ded0f10119858d8c820b6a2c0f72",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
225243255 | pes2o/s2orc | v3-fos-license | Why Soot is not Alike Soot: A Molecular/Nanostructural Approach to Low Temperature Soot Oxidation
Due to worldwide increasingly sharpened emission regulations, the development of Gasoline Direct Injection and Diesel Direct Injection engines not only aims at the reduction of the emission of nitrogen oxides but also at the reduction of particulate emissions. Regarding present regulations, both tasks can be achieved solely with the help of exhaust after treatment systems. For the reduction of the emission of particulates, Gasoline (GPF) and diesel Particulate Filters (DPF) offer a solution and their implementation is intensely promoted. Under optimal conditions particulates retained on particulate filters are continuously oxidized with the exhaust residual oxygen so that the particulate filter (PF) is regenerated possibly without any additional intervention into the engine operating parameters. The regeneration behavior of PF depends on the reaction rates of soot particles with oxidative reactants at exhaust gas temperatures. The reaction rates of soot particles from internal combustion engines (ICE) often are discussed in terms of order/disorder on the particle nanoscale, the concentration and kind of functional groups on the particle surfaces, and the content of (mostly polycyclic aromatic) hydrocarbons in the soot. In this work the reactivity of different kinds of soot (soot from flames, soot from ICE, carbon black) under oxidation conditions representative for PF regeneration is investigated. Soot reactivity is determined in dynamic Temperature Programmed Oxidation (TPO) experiments and the soot primary particle morphology and nanostructure is investigated by High-Resolution Transmission Electron Microscopy (HRTEM). An image analysis method based on known methods from the literature and improving some infirmities is used to evaluate morphology and nanostructural characteristics. From this, primary particle size distributions, length and separation distance distributions as well as tortuosities of fringes within the primary particle structures are obtained. Further, UV–visible spectroscopy and Raman scattering and other diagnostic techniques are used to study the properties connected to the reactivity of soot and to corroborate the experimental findings. It is found that nanostructural characteristics predominantly affect reactivity. Oxidation rates are derived from TPO and interpreted on a molecular basis from quantum chemistry calculations revealing a replication/activation oxidation mechanism.
Introduction
Due to the increasingly stringent emission regulations (The European Parliament and the Council of the European Union 2007), the development of Gasoline (GDI) and Diesel Direct Injection (DDI) engines aims at the reduction of particulate matter (PM) emissions by application of Gasoline (GPF) or Diesel Particulate Filters (DPF). Particulate filters (PF) trap soot particles present in the exhaust gases resulting in minimized PM emission. To allow for a continuous operation of PF, the captured soot is removed periodically by a regeneration procedure or continuously by oxidation with residual O 2 in the exhaust gas (Fang and Lance 2004). The reaction rates of soot against oxidation by O 2 determine the frequency and efficiency of this kind of PF-regeneration (Fang and Lance 2004;Bhardwaj et al. 2014). The regeneration behavior of PF depends on the reaction rates of soot particles with oxidative reactants at engine exhaust gas temperatures (573 K to approximately 1073 K). Time scales of reactions of soot in oxidation determine the reactivity of soot and the reactivity of soot may be expressed through the reciprocal over-all rate coefficient of the oxidation reaction.
Although a contemporary topic of intensive research, oxidation reaction rates of soot at engine exhaust conditions are still barely predictable. Reaction rates of soot towards oxidation have been widely discussed under aspects ranging from physico-chemical properties via different morphological and nanostructural characteristics to its carbon nanostructure (Stanmore et al. 2001;Mühlbauer et al. 2016;Lu et al. 2012;Lapuerta et al. 2012).
Results reported from various investigations indicate diverse and sometimes conflicting influencing factors. Small soot primary particle sizes correlate with high reactivity (Stanmore et al. 2001;Lu et al. 2012;Lapuerta et al. 2012). Large specific surface area which correlates with small primary particle sizes is found to cause high soot reactivity (Fang and Lance 2004;Aarna and Suuberg 1997). The specific surface area (Fang and Lance 2004) as well as the pore structure within soot particles (Stanmore et al. 2001;Lu et al. 2012) are linked to the diffusive infiltration of oxidant into soot particles affecting also reactivity. Accessible active surface area rather than the overall surface area determines reactivity of soot particles (Aarna and Suuberg 1997;Neeft et al. 1997). Amongst the morphological characteristics of primary soot particles, their size distributions and fractal dimensions are indicators for reactivity. Sediako et al. (2017) observed changes in particle morphology during soot oxidation by real-time environmental TEM and demonstrated a correlation between soot aging and oxidation mode.
In addition the carbon nanostructure within primary soot particles is reported to affect considerably the reactivity towards oxidation. Soot primary particles consist of collocated packets of layered large polycyclic aromatic hydrocarbon molecules of different size with different functional edge-groups and distorted sites that can be assigned graphenelike characteristics (Pawlyta et al. 2015). Huang and Vander Wal (2016) demonstrated the dependence of the nanostructure of soot particles upon partial premixing and the associated changes in the gas-phase chemistry of ethylene-air Bunsen flames. The collocation of layered graphene-like structures measured by their length, separation distance and curvature as well as defects within the crystallite structure (Bhardwaj et al. 2014;Lapuerta et al. 2012;Vander Wal and Tomasek 2003;Pfau et al. 2018) altogether affect the reactivity of soot. Reactivity of soot, therefore, is a consequence of both agglomerate morphology and the nanostructure and chemical composition of primary particles (Pfau et al. 2018). The nanostructure determines the number density of sp 2 and sp 3 -hybridized C-atoms and, therefore, the energy level of C-atoms within these structures accessible for oxidation. The larger the number density of sp 3 -hybridized C-atoms and the less organized the nanostructure of soot, the higher its reactivity and oxidation rate (Vander Wal and Tomasek 2003;Su et al. 2004;Knauer et al. 2009).
High-resolution transmission electron microscopy (HRTEM) provides information about the nanostructural properties such as the distribution of length, distance and tortuosity of the graphene-like layers (Su et al., 2004;Knauer et al., 2009;Yehliu et al., 2011a;Palotas et al., 1996;Sadezky et al., 2005;Sharma et al., 2000;Vander Wal et al., 2004a, b). Although HRTEM is limited to ex-situ measurements and particles in the size range below about 10 nm as well as little contrast-forming particles are difficult to detect, it is the preferred method to investigate the carbon nanostructure of soot (Su et al., 2004;Yehliu et al., 2011a, b;Palotas et al., 1996;Sharma et al., 1999, 2000;Vander Wal et al., 2004a, b;Shim et al., 2000;Botero et al., 2016). Ultrafine particles with diameters in the 10 nm range or below and highly amorphous particles are emitted from GDI engines (Czerwinski et al., 2018; Bardi et al., 2019) and would not contribute to the fringe analysis. HRTEM images are analyzed qualitatively, manually, or with the help of computer-based image processing software (Vander Wal et al., 2004a, b;Shim et al., 2000;Sharma et al., 1999;Botero et al., 2016;Yehliu et al., 2011b). Large effort has been applied in recent years to improve quantitative analyses of HRTEM of soot, compare e.g. (Vander Wal et al., 2004a, b;Shim et al., 2000;Sharma et al., 1999;Botero et al., 2016;Yehliu et al., 2011b;Toth et al., 2013, 2015). The objective of this work is to develop substantiated information about the oxidation of soot with molecular oxygen on a nanoscale/molecular level to identify the predominant parameters of soot governing its reactivity. As a remedy against some infirmities known from the literature, an HRTEM image analysis method, developed to perform a quantitative and reproducible analysis of the carbon nanostructure, has been applied to different soot and carbon black samples. Subsequently, the nanostructure determined by the image processing method is compared for the different soot and carbon black samples. Similar work has been performed by Pfau et al. (2018), comparing the nanostructure of carbon black and soot-in-oil from gasoline and diesel engines. The results are further interpreted using data obtained for the reactivity of soot and carbon black in oxidation with oxygen derived from temperature programmed oxidation (TPO). Further, UV-visible spectroscopy, Raman scattering and other diagnostic techniques are used to support the study of the reactivity of soot during oxidation. Soot samples are inspected at different burn-out ratios, providing valuable information about the development of reactivity during particle burn-out and some evidence for the development of oxidation models. Results are interpreted with kinetic models, and oxidation rate kinetics are derived from TPO and interpreted on a molecular basis from quantum chemistry calculations.
Materials
In this work a vast variety of soot particles has been investigated. Soot samples were generated using a Graphite Spark Discharge Generator (GFG-3000, Palas GmbH, Germany) at a voltage of 2500 V and a discharge frequency of 500 Hz. The argon carrier gas flow was set to 10 nl min −1 (nl: norm liters). This soot sample will be denoted as AGFG. Downstream of the aerosol generation, a nitrogen flow of 10 nl min −1 was used to dilute the aerosol before collecting it on filters. The carrier gas argon was replaced by nitrogen at the same flow rate to generate a further soot sample (NGFG). Additional soot samples were prepared by collecting soot on quartz fiber filters downstream of a low pressure (200 mbar) flat premixed laminar acetylene/oxygen flame (equivalence ratio = 2.7) (ACFL) and flat premixed iso-octane/oxygen/argon flames (equivalence ratio = 2.3) at pressures of 1 bar, 2 bar and 3 bar (i-OCT1, i-OCT2, i-OCT3). Flame soot samples were Soxhlet-extracted to remove adsorbed polycyclic aromatic hydrocarbons (PAH). In addition, soot samples from a turbocharged 4-cylinder research GDI engine (2.0 liters), collected close to the manifold at medium (A22) and high (A33) engine load, have been included. Sample A22_1 is obtained from the GDI engine operated with a modified injection pressure (87.5 bar compared to 100 bar for sample A22). Engine conditions are given in detail in Koch et al. (2020). The ICE soot was complemented by samples from a commercial Diesel Direct Injection engine at injection pressures of 2200 bar, 1600 bar and 1200 bar (C50_2200, C50_1600, C50_1200). Sampling procedure and engine description are delineated in Lindner et al. (2014). For comparison, commercial carbon black samples were investigated. The carbon blacks examined are Printex® 25 (P25), Printex® 45 (P45), Printex® 85 (P85) and Printex® 90 (P90) (Orion Engineered Carbons, Luxembourg), manufactured by the furnace carbon black process, and acetylene carbon black Alfa Aesar™ (ThermoFischer Scientific Inc., USA), 100% compressed (AC100). The carbon black samples provide materials with a wide range of mean primary particle sizes, specific surface areas and reactivity, see Table 1. As commercial products they are easily available and, therefore, are ideal for the investigation presented here. Table 1 summarizes some properties of the investigated soot and carbon black samples: the BET specific surface area (BET), the count median diameter (CMD) of the primary particle size distribution approximated by a log-normal distribution, the C/H atomic ratio and the oxygen mole fraction X O 2 determined by elemental analysis, and the temperature of maximum reaction rate (T max ) during TPO. T max is widely used to indicate the reactivity towards oxidation, where low temperatures are linked to high reactivity and vice versa, see the following sections. As can be noticed from Table 1, the properties of the investigated soot and carbon black samples are spread over a wide range in magnitude. The listed soot samples were chosen deliberately to harden or exclude correlations of the reactivity against oxidation with different morphological, chemical and nanostructural properties. Some correlations between properties, e.g. reactivity indicated by T max , the temperature of maximum oxidation rate during TPO, with specific surface area and mean primary particle size, differ from those given in the literature, see the GDI and carbon black soot samples. Equally well, e.g.
the relationship between specific surface area and mean primary particle size differs from the expected one (the higher the specific surface area the lower the mean primary particle size), see the Printex Ⓡ samples. On the other hand, some properties between different samples are similar, so that conflicting results can be explained by widening the spectrum of influencing factors and identifying those with highest weight.
The various applied analytical methods for the investigation of the soot samples, see Sect. 2.2, require varying sample amounts. Due to the different origin of the soot (commercial carbon black, soot from ICE, soot from flames), only varying quantities were available for the experiments. Therefore, not every method could be applied fully to the entity of soot samples, which particularly applies to the soot samples from ICE. However, due to the similarity with respect to measured morphological and nanostructural properties, these samples were withheld in the discussion for comparison.
The emission of soot from ICE depends significantly on operation conditions such as injection pressure, injection timing, multiple injection patterns, load etc., resulting in soot which is not alike soot when looking at some bulk properties. However, soot from ICE is similar to flame generated soot or carbon black with respect to morphological and nanostructural properties and, therefore, is alike these kinds of soot with respect to reactivity against oxidation. Vice versa, alike soot with respect to some bulk properties exhibits diverse nanostructure and reactivity. For the above reasons, the wide range of soot samples including soot from ICE was considered in the tests, though the application of the full diagnostic and analytic methods to all these samples was limited.
Applied Methods
The elemental analysis of the elements carbon, nitrogen and hydrogen was performed using a Vario Micro Cube elemental analyzer (Elementar Analysensysteme GmbH, Germany). The measurement system is equipped with thermal conductivity (TCD) and infrared (IR) detectors. The soot sample mass was 2.5 mg for each measurement.
Specific surface areas were measured using a BELSORP-mini (BEL Japan Inc., Japan) volumetric adsorption measurement instrument with nitrogen physisorption at 77 K. Prior to the measurements, the apparatus was calibrated using internal standards. Soot samples were outgassed in vacuum at 378.15 K before the measurements. The resulting isotherms were analyzed conforming to the latest IUPAC recommendations with respect to the BET surfaces.
UV-visible spectroscopy was performed with soot particle suspensions in N-methyl-pyrrolidone (NMP). Apicella et al. (2004) mention that NMP is a suitable solvent to achieve stable suspensions of carbon-rich, solid materials. Sample preparation was carried out by mixing sample and solvent, followed by dispersing with an ultrasonic homogenizer (Bandelin, Sonoplus HD3200, Germany) for 10 minutes. The spectral response of the solvent NMP in the UV limits the visualization of the spectra to a region of 280-800 nm. UV-visible spectra were measured in the absorbance mode on a Chirascan (Applied Photophysics, UK) spectrometer and the Cary 300 UV-visible spectrometer (Agilent, USA) using 1 cm quartz cuvettes. Each sample was measured four times, while the cuvette was refilled before each measurement.
Oxidation Rates
The oxidation rates of the soot samples were measured through temperature programmed oxidation (TPO) employing thermogravimetric analysis (TGA) (Koch et al. 2020;Hagen et al. 2020). Dynamic, non-isothermal measurements were performed with a TG 209 F1 Libra thermo balance (Netzsch Gerätebau GmbH, Germany) at a heating rate of 5 K min −1 . The soot samples with a sample mass of 2 ± 0.2 mg were oxidized under an atmosphere consisting of 5 %vol O 2 and 95 %vol N 2 at atmospheric pressure. The thermo balance was temperature-calibrated with reference to the melting points of In, Sn, Bi, Zn, Al and Ag. In addition to virgin soot, soot samples at different burn-out ratios (mostly 0%, 20%, 60%, 80%, 90%) have been investigated.
Complementary to the dynamic TPO-experiments, soot samples were also oxidized stepwise to mass losses of 20%, 60%, 80% and 90%. During these experiments soot samples were heated up in the thermobalance under inert conditions ( N 2 ) applying a heating rate of 200 K min −1 . After attaining the reaction temperature of 1073 K, soot samples were oxidized with a mixture of 5% by volume of O 2 in N 2 under isothermal conditions until the desired mass loss and cooled down under inert conditions. After each oxidation step part of the sample was examined with regard to the reactivity of the primary particles by TPO and by HRTEM analysis and the remaining sample was used for the subsequent oxidation step.
As long as reaction mechanisms based on elementary reactions are not available, the oxidation of soot may be described with the help of global over-all kinetic expressions. To deduce kinetic parameters for the oxidation of soot with excess of oxygen from the TPO profiles, a reacting-volume model of the form
−dm soot /dt = k (1) ox ⋅ p O 2 ^m ⋅ N act ^n    (1)
is applied, where m soot means the actual soot mass, p O 2 is the oxygen pressure and N act the number density of sites accessible for oxidation, with the reaction orders m and n, respectively. k (1) ox is the corresponding reaction rate coefficient. For O 2 in excess, p O 2 can be regarded as constant and, according to the reacting-volume model, N act can be approximated assuming N act ∝ m soot . This results in
−dm soot /dt = k (2) ox ⋅ m soot ^n    (2)
Introducing α = m soot /m 0 soot gives
−dα/dt = k (3) ox ⋅ α^n    (3)
When performing TPO experiments with a constant heating rate β = dT/dt, the rate equation can be rewritten as
−dα/dT = k (4) ox ⋅ α^n    (4)
The rate coefficient k (4) ox contains the heating rate β and, in case of n ≠ 1, the initial mass m 0 soot . Therefore, all TPO experiments are performed with a constant initial mass of 2 mg and a constant heating rate of 5 K min −1 to compare rate coefficients. The temperature dependency of the rate coefficient is expressed with the help of a simple Arrhenius approach, k (4) ox = k (4) 0 ox ⋅ exp(−E a /(R⋅T)), where E a is the apparent activation energy of the over-all reaction. The reactivity of soot may be expressed through the reciprocal rate coefficient k (4) ox .
The three-parameter rate coefficient can be obtained by numerical integration of the model equation Eq. 4 and fitting it to measured TPO profiles with the help of non-linear regression, e.g. via the method of Levenberg-Marquardt. As an alternative, the easily measurable temperature T max at the maximum change dα/dT can be used to describe reactivity. Figure 1 exhibits TPO-profiles computed for k (4) 0 ox = 4.0 ⋅ 10 6 K −1 and E a = 100 kJ mol −1 . The left branch of the TPO-profiles until the maximum is determined by the temperature dependency of the oxidation rate, while the right branch of the rate is limited by the depletion of the soot mass. T max , the temperature of maximum oxidation rate during TPO (indicated by the broken line), is correlated approximately linearly to the apparent activation energy, meaning that low T max indicates high reactivity and vice versa (e.g. T max = 6.0 ⋅ E a + 30 for k (4) 0 ox = 4.0 ⋅ 10 6 K −1 , n = 1 and E a in kJ ⋅ mol −1 ).
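For illustration, the non-linear regression of Eq. 4 to a measured TPO profile can be set up in a few lines; the scipy sketch below is minimal, the starting values correspond to the case of Fig. 1, and all names are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

R_GAS = 8.314  # J mol-1 K-1

def tpo_profile(T, k0, Ea, n):
    """-dalpha/dT over the temperature grid T for Eq. 4 with an Arrhenius rate coefficient."""
    def rhs(temp, alpha):
        return -k0 * np.exp(-Ea / (R_GAS * temp)) * np.maximum(alpha, 0.0) ** n
    sol = solve_ivp(rhs, (T[0], T[-1]), [1.0], t_eval=T, method="LSODA")
    alpha = sol.y[0]
    return k0 * np.exp(-Ea / (R_GAS * T)) * np.maximum(alpha, 0.0) ** n

def fit_tpo(T_meas, rate_meas):
    """Fit k0 (K-1), Ea (J mol-1) and n to a measured -dalpha/dT profile (Levenberg-Marquardt)."""
    popt, _ = curve_fit(tpo_profile, T_meas, rate_meas,
                        p0=(4.0e6, 100.0e3, 1.0), maxfev=20000)
    return popt

# synthetic check reproducing the Fig. 1 case (k0 = 4.0e6 K-1, Ea = 100 kJ mol-1, n = 1)
T = np.linspace(600.0, 1100.0, 500)
rate = tpo_profile(T, 4.0e6, 100.0e3, 1.0)
print("T_max =", T[np.argmax(rate)], "K")
```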
Raman Microscopy
The Raman spectra of soot samples were obtained using a Renishaw inVia Raman microscope with a fiber optic probe (FOP), equipped with a Nd:YAG laser (532 nm, laser power 150 mW). Spectra were recorded from 50 to 2000 cm −1 . For detection, a grating with 1800 lines mm −1 , a CCD detector and an objective with 100-fold magnification were employed.
Raman spectra of soot can be characterized by a 5-peak structure (Ferrari and Robertson 2000) containing a G-peak at about 1580 cm −1 (graphite band, G), see Fig. 2. This band is attributed to an ideal graphitic lattice and caused by the relative motion of sp 2 carbon atoms (E 2g -symmetry). The D-peak at about 1355 cm −1 is a breathing mode of the carbons in six-membered rings (A 1g -symmetry). This mode becomes active only in the presence of disorder (defect or disordered band, D1). A peak at about 1620 cm −1 refers to disordered graphitic structures (D2, E 2g -symmetry), whereas the peak at about 1500 cm −1 (D3) is attributed to amorphous carbon. Finally, the peak at about 1200 cm −1 (D4, A 1g -symmetry) is attributed to sp 2 and sp 3 carbon atoms not necessarily in six-membered rings. To reproduce the spectra, a 5-band fitting procedure according to Sadezky et al. (2005) and Ferrari and Robertson (2000) has been applied, which is demonstrated in Fig. 2. The figure displays the measured spectrum (symbols) and the intensities of the D1- to D4- and G-peaks as well as the fitted spectrum (gray line). This allows the estimation of the relative intensities of D- and G-bands, providing qualitative information about the abundance of graphitic ordered structures and disordered regions and the content of amorphous carbon. High values of the intensity ratio I D1 /I G indicate a predominance of low-ordered graphene-like structures of small extension, whereas low values suggest well-ordered, graphitic graphene-like structures of large extension.
Fig. 1 Calculated TPO-profiles according to Eq. 4 at different reaction order n using k (4) 0 ox = 4.0 ⋅ 10 6 K −1 and E a = 100 kJ mol −1
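The 5-band fit is a constrained non-linear least-squares problem. The sketch below uses Lorentzian profiles for the G, D1, D2 and D4 bands and a Gaussian for D3, one common choice following Sadezky et al. (2005); band shapes, starting positions and names are assumptions of the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz(x, a, x0, w):
    return a * w**2 / ((x - x0)**2 + w**2)

def gauss(x, a, x0, w):
    return a * np.exp(-0.5 * ((x - x0) / w)**2)

def five_bands(x, aG, xG, wG, a1, x1, w1, a2, x2, w2, a3, x3, w3, a4, x4, w4):
    """G, D1, D2, D4 as Lorentzians and D3 as a Gaussian (one common choice)."""
    return (lorentz(x, aG, xG, wG) + lorentz(x, a1, x1, w1) + lorentz(x, a2, x2, w2)
            + gauss(x, a3, x3, w3) + lorentz(x, a4, x4, w4))

def fit_raman(shift, intensity):
    """Fit the five bands and return the intensity ratio I_D1/I_G with the parameters."""
    s = float(np.max(intensity))
    p0 = [s, 1580, 30,  s, 1355, 60,  0.4*s, 1620, 25,  0.4*s, 1500, 80,  0.3*s, 1200, 80]
    popt, _ = curve_fit(five_bands, shift, intensity, p0=p0, maxfev=50000)
    i_g, i_d1 = popt[0], popt[3]
    return i_d1 / i_g, popt
```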
HRTEM Image Preparation and Processing
In preparation for HRTEM recordings, soot samples were mixed with ultra-pure water, stirred by ultrasound and dispersed onto a carbon-coated TEM copper grid. HRTEM images were acquired using a Philips CM200 transmission electron microscope (ThermoFischer Scientific Inc., USA), operated at 200 kV and a magnification of 380,000, resulting in a spatial resolution of 0.0283 nm px −1 . Size distributions of primary soot particles have been determined from HRTEM images of soot particle aggregates applying a MATLAB procedure, the single steps of which are illustrated in Fig. 3. After reading the TEM images, appropriate scaling and selection of a region of interest, a Gaussian low-pass filter is applied to remove background noise. These steps are followed by binarization as well as edge enhancement and a Hough transformation to detect circular objects.
The Hough transformation leads to the detection of an excessive number of circular objects, not all of which are primary particles. Therefore, different operations are applied to extract circles representing actual particles with their accurate diameter. The outlines of detected circles are compared to edge structures extracted from the original image. Overlapping circles are then ranked and deleted according to their congruity with those edge structures, leaving only well-fitting circles. At last, size distributions of the detected primary soot particles and fractal dimensions D f are calculated. The fractal dimension according to the minimum bounding rectangle (MBR) method is given by Eq. 5, with the circular area A pp , the aggregate area A a and the width W and length L of the aggregate. The exponent is taken from Köylü et al. (1995) as 1.09. Size distributions are based on the evaluation of about five exposures and 100-500 primary particles each. The procedure has been tested with synthetically generated size distributions before application.
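A compact sketch of the circle-detection step using scikit-image (instead of the MATLAB routine used in the work) could look like the following; the radius limits and the number of candidate peaks are placeholders, and the ranking of overlapping circles against edge structures described above is omitted.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_primary_particles(img, r_min_px, r_max_px, n_candidates=200):
    """Detect candidate primary particles as circles in a filtered, scaled
    aggregate image; returns centre coordinates and radii in pixels."""
    edges = canny(img, sigma=2.0)                 # edge map for the Hough transform
    radii = np.arange(r_min_px, r_max_px)
    accumulator = hough_circle(edges, radii)
    _, cx, cy, detected_radii = hough_circle_peaks(
        accumulator, radii, total_num_peaks=n_candidates)
    return cx, cy, detected_radii

# diameters in nm follow from the image scale: d_nm = 2 * detected_radii * nm_per_px
```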
The essential steps of HRTEM image processing to evaluate the primary particle nanostructures are filtering in the Fourier space, binarization, skeletonizing elements, post-processing of the skeletons and analysis of fringe length, tortuosity and separation distance of the fringes (Shim et al. 2000; Sharma et al. 1999; Yehliu et al. 2011b). For binarization, the choice of a suitable global threshold (TH) still represents an unresolved challenge. While the evaluation of fringe length and tortuosity is well established, the calculation of the separation distance still contains uncertainties and difficulties.
Fig. 3 Procedure for evaluation of primary particle size distributions from HRTEM images and example from i-OCT3 soot. The measured size distribution (right diagram) is approximated by a log-normal distribution with CMD = 24 nm and a geometric standard deviation of 1.2. HRTEM of an exemplary aggregate is also given

The single steps of the procedure used in this work are depicted in Fig. 4, see also Koch et al. (2020). The single computing steps are given in the left part of the figure and illustrated with the help of corresponding HRTEM images (right part). The imported 16-bit HRTEM images are saved as gray scale matrices. The images are inverted, top-hat transformed and their spatial resolution is calculated (yellow framed rectangles). In order to reduce the background noise resulting from optical distortion of the HRTEM images, the use of a Gaussian low-pass filter is an established method (Botero et al. 2016; Gonzalez and Woods 2008). In addition to removing small background structures, larger structures are reduced in size (Sharma et al. 1999). This leads to a loss of carbon fringe layers or a separation of structures. To counteract this effect, an image comparison is implemented in the algorithm. By comparing filtered and unfiltered images, only structures present in the filtered image are kept but then replaced by the original unfiltered structure (green framed rectangles). As a result, background noise is significantly reduced while fringes maintain shape and size.
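The filter-and-compare idea can be sketched as follows, assuming both the filtered and the unfiltered image are binarized with the same threshold and that the gray-scale image is already a float image; this is only an illustration of the principle, not the implemented MATLAB code.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.measure import label

def filter_and_restore(gray_img, sigma, threshold):
    """Keep only structures that survive Gaussian low-pass filtering, but
    restore them with their original (unfiltered) shape and size.
    gray_img is assumed to be a float gray-scale image so that the same
    threshold applies before and after filtering."""
    binary_unfiltered = gray_img > threshold
    binary_filtered = gaussian(gray_img, sigma=sigma) > threshold
    labelled = label(binary_unfiltered, connectivity=2)
    restored = np.zeros_like(binary_unfiltered)
    for lab in range(1, labelled.max() + 1):
        obj = labelled == lab
        if np.any(obj & binary_filtered):   # structure is still present after filtering
            restored |= obj                 # ...so keep its unfiltered shape
    return restored
```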
Binarization separates pixels into two different categories according to their intensity. The use of a top-hat transformation is an established method to prepare an image for binarization. Differences in exposure within an image make a global threshold (TH) unfit for this task. According to a TH, the intensity I = {0, 1} (I = 1 foreground, I = 0 background) is assigned to every pixel during binarization. The determination of a suitable TH is considered crucial for image processing (Yehliu et al. 2011b; Gonzalez and Woods 2008; Galvez et al. 2002; Serra 1989). An ideal TH is achieved when pixel intensities show a bimodal distribution. Then, the TH value is equal to the local minimum in between the two maxima of the distribution. In HRTEM images, these modes would represent graphene-like structures and background, respectively. The intensity distributions of HRTEM images, however, are unimodal.
Applying the well-known Otsu threshold method (Jähne 2002) to the unimodal intensity distributions resulting from HRTEM images of soot primary particles, no suitable TH could be found. Therefore, a method was developed in this study, using a TH value that causes a minimal alteration of the graphene-like structures when changing its value. For this, the number of pixels per object has been calculated versus the value of the TH. The resulting histograms were fitted by a two-parameter Γ-function. The obtained functions exhibit regions in the parameter space (shape parameter and scale parameter) where the number of pixels changes only little with changing TH value. This transition region characterizes an ideal TH value for binarization where fringes are optimally separated from background pixels. As part of this development 215 HRTEM images have been analyzed. Only 19 of them ( ≈ 8%) did not result in reasonable TH values. Further testing with the use of a different transmission electron microscope (FEI TITAN 3 (80-300) (ThermoFischer Scientific Inc., USA), at 300 kV) also confirmed the method.
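A simplified sketch of the idea: scan candidate thresholds, record how many foreground pixels per object remain, and pick a value in the flat transition region. The Γ-function fit of the actual procedure is replaced here by a plain plateau search, so this is an approximation of the described method, not a reproduction of it.

```python
import numpy as np
from skimage.measure import label

def threshold_from_plateau(img, thresholds):
    """Return the global threshold at which the mean number of foreground
    pixels per object changes least with the threshold value."""
    mean_px = []
    for th in thresholds:
        binary = img > th
        labelled, n_obj = label(binary, connectivity=2, return_num=True)
        mean_px.append(binary.sum() / n_obj if n_obj > 0 else 0.0)
    mean_px = np.asarray(mean_px, dtype=float)
    slope = np.gradient(mean_px, thresholds)       # change of the curve with TH
    return thresholds[np.argmin(np.abs(slope))], mean_px

# e.g. th_best, curve = threshold_from_plateau(img, np.linspace(img.min(), img.max(), 100))
```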
Skeletonizing the elements reduces all objects to lines with a width of one pixel. This prepares the image for calculating the length L, the Euclidian end point distance e and, hence, the tortuosity T = L/e of the fringes. In this study, the objects are skeletonized by a Zhang-Suen algorithm (Zhang and Suen 1984). Subsequently, a number is assigned to every object. Branches within the skeletonized structures, which are joined by branch points (BPs), originate from applying the skeletonization algorithm. The BP analysis aims at creating a continuous main structure by deleting branches not belonging to this main graphene-like structure. For this, Yehliu et al. (2011b) use a morphological opening and closing method. This method leads to an incomplete removal of branches of the carbon nanostructures investigated in this study. Therefore, each BP is analyzed individually using a similar procedure as introduced by Shim et al. (2000) as well as Sharma et al. (1999) (blue framed rectangles).
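A minimal sketch of the skeletonization and branch-point detection step with scikit-image, whose skeletonize routine offers the Zhang-Suen thinning as method='zhang'; the individual branch-point post-processing described above is not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize
from skimage.measure import label

def skeletonize_fringes(binary_img):
    """Reduce binarized fringes to one-pixel-wide skeletons, label them and
    flag branch points (skeleton pixels with more than two 8-neighbours)."""
    skeleton = skeletonize(binary_img.astype(bool), method='zhang')
    labelled, n_fringes = label(skeleton, connectivity=2, return_num=True)
    neighbours = convolve(skeleton.astype(int), np.ones((3, 3), int),
                          mode='constant') - skeleton
    branch_points = skeleton & (neighbours > 2)
    return skeleton, labelled, n_fringes, branch_points
```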
The fringe length results from counting the pixels of a structure, while pixels are assigned different lengths according to their connection to neighboring pixels. A straight link between two pixels results in a length of L = 1 px while a diagonal connection is equal to a length of L = √2 px. Tortuosity T describes the ratio of fringe length and Euclidian distance and, hence, the curvature of fringes. The separation distance D, on the other hand, is used to indicate the short-range order of fringes, see Fig. 4 (gray framed rectangles). The algorithm developed in this study allows an automated determination of both structural parameters. Prior to analyzing the nanostructure of soot and carbon blacks, the developed image processing procedure, also programmed in MATLAB, has been validated by analyzing manually created, characteristic reference structures. This led to a maximum deviation of 3% concerning the length of the detected structures. For evaluation of the nanostructure, 20 to 50 soot primary particles from different exposures and up to 5000 fringes were analyzed.
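For a single, branch-free skeletonized fringe, length and tortuosity can be obtained directly from its pixel coordinates; the following sketch applies the 1 px / √2 px weighting described above and is meant only to illustrate the bookkeeping, with the pixel scale of 0.0283 nm px−1 taken from the text as a default.

```python
import numpy as np

def fringe_length_and_tortuosity(coords, nm_per_px=0.0283):
    """coords: (N, 2) array of pixel coordinates of one branch-free skeleton.
    Returns the fringe length in nm and the tortuosity T = L / e."""
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    links = (d > 0) & (d < 1.5)          # 8-neighbour links: distances 1 and sqrt(2)
    length_px = d[links].sum() / 2.0     # every link is counted twice
    n_neighbours = links.sum(axis=1)
    ends = coords[n_neighbours == 1]     # end points have exactly one neighbour
    e_px = np.linalg.norm(ends[0] - ends[-1]) if len(ends) >= 2 else np.nan
    tortuosity = length_px / e_px if e_px else np.nan
    return length_px * nm_per_px, tortuosity
```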
The length L, spacing D and tortuosity T of the fringes reflect the corresponding properties of the graphene-like layers in the primary soot particles, so that "fringe" and "graphene-like layer" are used synonymously in the following. It should also be noted that ultrafine particles with sizes in the 10 nm range and below and particularly amorphous particles emitted from GDI engines (Czerwinski et al. 2018; Bardi et al. 2019) are hardly accessible for this kind of analysis.
Quantum Chemistry Calculations
For interpretation of the oxidation rates obtained from TPO measurements, quantum-chemistry estimations have been performed. Applying these methods, calculations have to be restricted to comparatively small molecules to obtain reliable results. Therefore, model molecules which represent carbon structures in soot primary particles are used for this kind of calculations, see e.g. Sendt and Haynes (2011), Edwards et al. (2013, 2014). Soot primary particles are built up of layered graphene-like structures consisting of large polycyclic aromatic hydrocarbons, partially equipped with functional groups and aliphatic side chains. Particularly in the state of incipient soot, smaller structures are linked via aliphatic bridges, see e.g. D'Anna (2009). The primary attack of O 2 at temperatures of about 900 K occurs at aliphatic side chains or aliphatic bridges rather than at aromatic C-H-sites. According to Mehl et al. (2011), Zhang and McKinnon (1995), the respective rate coefficients differ by more than one order of magnitude. It is then likely that aliphatic side chains and aliphatic bridges are stripped from the polycyclic aromatic structures first and the much slower activation of the remaining polycyclic aromatic structures constitutes the rate limiting step. Therefore, and because the amount of carbon fixed in these side chains and aliphatic bridges is low compared to that contained in the graphene-like structures, the polycyclic aromatic hydrocarbon pyrene has been used as a model molecule for graphene-like layers in soot primary particles in this work. The estimated rate coefficients presented in Sect. 3.4, therefore, are limited to activation/degradation reactions of this polycyclic structure. The hydrogen content of the investigated soot samples, which is even considerable for the soot sample with the lowest reactivity (AC100, see Table 1), is lower than that of pyrene. However, considering the large extension of the polycyclic aromatic structures in the soot primary particles, the decrease of the hydrogen content with increasing size of the structures and the focus on the activation/degradation reactions of these structures, the choice of pyrene as a model molecule is justified.
The reactions of pyrene with molecular oxygen are investigated for developing kinetic models and oxidation rate kinetics. To determine the molecular properties of reactants, transition states and products of the different species occurring in the pyrene/O 2 system, the Gaussian 03/09 (Frisch et al. 2016) and the Gaussian-4 (G4) (Curtiss et al. 2007) program suites have been employed. The hybrid density functional method DFT (B3LYP), which combines the three-parameter Becke exchange functional B3 with the Lee-Yang-Parr nonlocal correlation functional (LYP), together with the doubly polarized basis set 6-311G(d,p), is used to optimize geometries (Becke 1993; Lee et al. 1988; Montgomery et al. 1994). The use of DFT (B3LYP) is affordable and permits handling of large molecules at low computational costs. This method, when combined with isodesmic reactions, delivers good accuracy for thermodynamic data. B3LYP/6-311G(d,p) is chosen because it is reported to yield accurate geometries and reasonable energies and vibration frequencies at reasonable computational expense (Durant 1996; Andino et al. 1996). B3LYP has been validated previously by comparing results with higher level methods and its application for large molecules and radicals has produced reasonable results (Sebbar et al. 2008, 2011, 2015). Only transition state structures differ sometimes from other methods due to the differences in structures calculated by B3LYP. The reaction rate coefficients for the primary reactions of pyrene with oxygen were calculated and compared with data from literature (Manion et al. 2015) when available.
Kinetic parameters are determined as a function of temperature from the calculated thermochemical parameters using chemical activation analysis; they are obtained from canonical transition state theory (TST) calculations.
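For reference, the canonical TST expression presumably underlying these calculations can be written as below; the exact standard-state convention and any tunneling correction actually used by the authors are not given in this section, so this is only a generic sketch.

```latex
% Canonical transition state theory: rate coefficient from the Gibbs free
% energy of activation \Delta G^{\ddagger}(T) of the optimized transition state.
k(T) = \kappa(T)\,\frac{k_\mathrm{B}T}{h}
       \left(\frac{RT}{p^{\circ}}\right)^{\Delta n^{\ddagger}}
       \exp\!\left(-\frac{\Delta G^{\ddagger}(T)}{RT}\right)
% \kappa(T): optional tunneling correction; \Delta n^{\ddagger} = 0 for a
% unimolecular and 1 for a bimolecular step with standard-state pressure p^{\circ}.
```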
TPO-Results
TPO-profiles for a subset of soot samples from Table 1 are given in Fig. 5. The figure contains the experimental results (symbols) and the calculated profiles derived from Eq. 4 and the fitting procedure introduced in Sect. 2.2.1 (solid lines) using the kinetic parameters given in Table 2. In Table 2, X i denotes the mass fraction of the different soot types in the samples (NGFG, AGFG). The calculated values and the given digits represent the 95% confidence interval from regression. As can be extracted from Fig. 5 and Table 2, oxidation rates of the soot and carbon black samples are spread over a wide range of T max . The apparent activation energies cover a range from ≈ 95 to ≈ 175 kJ mol −1 except for a low temperature peak appearing at about 570 K for NGFG and AGFG. Similar values for the overall activation energies ( ≈ 150 kJ mol −1 ) of the ICE soot samples (C50_1200, C50_1600, C50_2200, A22) are reported in Zöllner et al. (2017). The A22 soot sample exhibits a low temperature peak at about 430 K (also present in the TPO of ACFL) which can be identified as evolution of volatiles by oxidizing the samples after heating them up under inert atmosphere up to about 800 K. The spark discharge generated soot samples show three peaks in the TPO profiles at about 570 K, 790 K and 910 K. The estimation of kinetic parameters works best when treating these samples as consisting of three independent kinds of soot, denoted as AGFG (1) , AGFG (2) and AGFG (3) , and the same for NGFG. The A22 sample suggestively also exhibits the peak at 570 K.
Treating soot samples such as AGFG or NGFG as consisting of three independent soot types raises the hypothesis that different reactive parts of primary particles in the soot aggregates are oxidized independently and the oxidation rate is a linear combination of the oxidation rates of the different soot types. This can be verified by the oxidation of soot sample A22_1 illustrated in Fig. 6, which contains the experimental oxidation rates of the sample (red symbols). The experimental TPO profile contains two major peaks at about 790 K and 890 K and includes features also exhibited by those of A22 and the most prominent peak of AGFG. The TPO profile calculated by a linear combination of these two TPO profiles (0.74*AGFG (green line) + 0.33*A22 (blue line)) is indicated by the red solid line. The low temperature peak at about 450 K which represents the evolution of volatiles is excluded in the combination.
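The linear-combination test can be written as a small non-negative least-squares problem; the weights of about 0.74 and 0.33 quoted above would then simply be the fitted coefficients. Function and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def mixture_weights(rate_mixture, component_rates):
    """Fit non-negative weights w_i so that sum_i w_i * r_i(T) approximates the
    measured TPO profile of a mixed sample on a common temperature grid."""
    A = np.column_stack(component_rates)   # one column per soot type
    weights, residual = nnls(A, rate_mixture)
    return weights, residual

# e.g. w, _ = mixture_weights(r_A22_1, [r_AGFG, r_A22])
```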
From the TPO experiments and the properties of the soot samples given in Table 1, no clear basic causes for the differences in reactivity are obvious.

Table 2 Kinetic parameters according to Eq. 4 estimated using least squares minimization for soot samples from Table 1

Fig. 6 Experimental TPO profile of the soot sample A22_1 (red symbols) and calculated profile (red solid line) using a linear combination of the TPO profiles of A22 and AGFG

Comparatively small soot primary particle sizes are not well correlated with high reactivity in all samples, compare e.g. P45 with a CMD of 31 nm with A22 with a CMD of 28 nm and a difference in T max of about 70 K. Similarly, large specific surface areas which correlate with small primary particle sizes cause different reactivity, compare e.g. NGFG with a BET surface of about 425 m 2 g −1 and AGFG with about 680 m 2 g −1 and a difference in T max for the most prominent TPO peak of about 160 K. Also the content of volatiles present e.g. in the samples A22, A22_1 and ACFL obviously does not lead to comparable reactivity. If the oxygen content in the soot samples indicates the presence of functional groups at the surface of soot particles, also no clear correlation is found between reactivity and functional groups. Soot samples with alike bulk properties, e.g. P25, i-OCT1, A22 with a CMD of ≈ 30 nm or NGFG, C50_1200, C50_1600 with a BET of about 420 m 2 g −1 are unalike with respect to reactivity (widely varying T max , the temperature of maximum oxidation rate during TPO). Vice versa, soot samples with alike reactivity, e.g. P90, i-OCT3, P85, C50_1200 with T max ≈ 940 K are unalike with respect to bulk properties such as CMD or BET. Therefore, the basic causes for the dependency of reactivity on soot properties as stressed e.g. in Fang and Lance (2004) and Neeft et al. (1997) have to be extended to morphological and nanostructural aspects of the soot primary particles.
Morphology and Nanostructures of Soot Primary Particles
Primary particle size distributions of some soot samples are given in Fig. 7. The size distributions all resemble logarithmic normal size distributions which are also indicated in the diagrams (dashed lines). The mean particle sizes differ for the single samples, whereas the variances and fractal dimensions are similar. Similar size distributions are obtained for other soot samples listed in Table 1. As discussed in the previous section, no clear correlation between mean particle size and reactivity expressed via T max is observable from the size distributions. In contrast to this, the distribution of fringe lengths and separation distances in the primary particles exhibit a clear correlation to T max . The lower T max (the higher the reactivity), the smaller the fringe lengths and the wider the distribution of the fringe separation distance, see Figs. 8 and 9. Small fringe lengths are correlated to wide distributions of the separation distance and vice versa. For the most reactive soot sample (AGFG) fringe lengths range up to 3 nm and the separation distances range up to 0.6 nm. The fringe length distribution for the least reactive soot sample (AC100) ranges up to higher than 7 nm with small separation distances (up to 0.45 nm) and a much narrower distribution of those distances. For comparison: The extension of a C 6 -unit in graphite is 0.380 nm and the separation distance of the graphene layers in graphite amounts to 0.335 nm.
The shape of the determined fringe length frequency distributions corresponds well with findings of other studies (Pfau et al. 2018;Yehliu et al. 2011a, b;Palotas et al. 1996;Rinkenburger et al. 2017). The fringe length distributions appear as approximately resembling Poisson-distributions or exponential distributions. These distributions are one-parameter distributions and, therefore, the mean fringe length seems to be sufficient for characterizing the distributions. The mean fringe lengths quantified in this study [ 0.45 nm < L f < 0.7 nm , compare e.g. ≈ 0.9 nm (Pfau et al. 2018)] are slightly smaller than those calculated in the literature. This is most likely due to the comparatively high frequency of short structures ( < 0.5 nm ). Particularly short structures are reconstructed in the algorithm due to the newly introduced image comparing procedure. Another reason for this could be the manual selection of regions of interest (ROI) favoring images with high contrast (Yehliu et al. 2011a, b;Palotas et al. 1996;Wan et al. 2018). The image processing algorithm used here analyzes full frame HRTEM images ( 120 nm × 120 nm ) including also short structures with comparably less contrast.
The slope of the fringe length distribution with a logarithmic scale on the frequency axis is larger for the highly reactive sample (AGFG) than for the less reactive sample (ACFL). This indicates a broader distribution for ACFL, although fringes with large lengths have not been detected in the evaluated image. The absence of large fringe lengths could be by chance, due to the selection of the region of interest in the image. A broader distribution of the fringe length corresponds to a narrower distribution of the separation distances, which has been measured for this image.
The resulting structure-reactivity correlation is depicted in Fig. 10, where the mean fringe length of the primary particles is plotted versus T max . The error bars in the plot are due to the evaluation of up to 5000 structures from primary particles in different HRTEM images and represent the variation of the evaluated mean lengths from different primary particles. The correlation is approximated by a linear fit in Fig. 10. The correlation depicted in Fig. 10 holds for the different types of soot obtained from different sources, e.g. flame soot, carbon blacks, engine soot and spark discharge generated soot.
Fig. 7 Primary particle size distributions of soot samples from Table 1 calculated with the procedure outlined in Sect. 2.2.3; dashed curves correspond to fitted log-normal distributions

UV-visible absorption spectra affirm qualitatively the structure-reactivity correlation, see Fig. 11. The shape of the spectra is similar to that resulting from soot analyzed in Apicella et al. (2004). All soot samples exhibit highest absorption at 290 nm, which decreases towards larger wavelengths. The decay is lowest for the soot sample with lowest reactivity (AC100, T max = 1063 K). For P25 (T max = 1010 K), after an initially steep decrease of the normalized absorption, a slightly steeper decrease compared with AC100 at larger wavelengths is observed. For the highly reactive soot samples AGFG, ACFL and i-OCT3 with T max around 911 K to 944 K, after an initially steep decrease the normalized absorbance decays similarly at larger wavelengths, however somewhat steeper than for the low reactive soot samples. For the latter samples, a different absorbance of a molecular feature at 450-500 nm, least pronounced for ACFL, and a slight shift of the contained bands occur.
The fringe length L is a measure for the spatial extension of graphene-like layers in the primary particles. Along with an increase of the extension of a graphene-like layer, the contribution of planar sp 2 -bonded carbon atoms and thereby π-electrons rises. Large contributions of π-electrons corresponding to a large extension of graphene-like layers cause a redshift of absorption and only a smooth decline of the absorption functions with increasing wavelength (Apicella et al. 2004). Small contributions of π-electrons cause a steep decrease. Relative to the total number of electrons, large mean fringe lengths (AC100) provide the largest number density of π-electrons and only a smooth decline of the absorption function whereas small mean fringe lengths (AGFG) provide the smallest number density of π-electrons and a steep decrease of the absorption function. This is reflected by plotting the ratio of the absorption function at different wavelengths versus T max , the temperature of maximum oxidation rate during TPO, or the fringe lengths, see Fig. 12. Again the resulting correlations are approximately linear. The error bars in the plot are due to the evaluation of up to four spectra from different portions of the same soot sample.

Fig. 10 Correlation of mean fringe length L f of the soot primary particles versus T max for the soot samples from Table 1
Fig. 11 Normalized UV-vis spectra of soot samples from Table 1

Fig. 12 Ratio R of the absorption function at 290 nm to that at 500 nm versus T max and mean fringe length, resp., of soot samples from Table 1

As exemplified by Fig. 6, the oxidation rates of soot samples with multiple T max are reproduced by a linear combination of the oxidation rates of different soot types with the respective T max (or T max in the respective range). The interpretation of this behavior is that the single soot types are oxidized independently. If the extension of the fringe layers is the essential parameter describing the reactivity, the linear combination should also apply to the distributions of the fringe length. This is demonstrated in Fig. 13 for the soot sample A22_1, the TPO profile of which is given in Fig. 6. The figure contains a HRTEM image including some primary particles of that sample (left) and the distribution of the fringe length evaluated from that image (upper right). The fringe length distribution of an artificial mixture with 0.74*AGFG + 0.33*A22, composed by a linear combination of the distributions of AGFG and A22, is given in the lower right part of Fig. 13. The two distributions show a reasonable correspondence. Due to the shape of the fringe length distributions, no polydisperse distribution as in the TPO profiles is expected for the linear combination.
This behavior can be verified also for other mixtures, as given in Fig. 14. The figure contains the experimental TPO profile (red symbols) of a prepared 1:1 mixture of the P25 and i-OCT3 samples and a TPO profile composed from the TPO profiles of these soot samples (red solid line). The upper right part of the figure displays an exemplary HRTEM image including some primary particles of that mixture. Finally, the fringe length distribution evaluated from up to 5000 fringes from mixed primary particles in different HRTEM images of that mixture (lower right) and that calculated from the linear combination of the distributions of P25 and i-OCT3 (lower left) are displayed. In contrast to Fig. 13, the figure contains the HRTEM and the fringe length distributions of the prepared mixture from the respective experiments. Again a good agreement between the two distributions is observed.
An intermediate conclusion, as argued by e.g. Bhardwaj et al. (2014), Lapuerta et al. (2012) and Vander Wal and Tomasek (2003), is that the nanostructure of the soot primary particles essentially determines their reactivity against oxidation. A simple reactivity-nanostructure relation approximately linearly correlates the mean length of the fringes in the primary particles with T max . Furthermore, oxidation rates of soot samples with differently reactive components can be linearly combined to the apparent oxidation rate.
Stepwise Oxidation of Soot
If the extension of the fringe layers is the essential property that determines reactivity, the reactivity of soot should increase with decreasing length of the fringes. Decreasing length of the fringes is expected during oxidation of soot particles and, therefore, the reactivity should increase during oxidation. To test this hypothesis, different soot samples were oxidized as described in Sect. 2.2.1 repeatedly with oxygen under isothermal conditions at 1073 K. After each oxidation stage with mass decreases of 20%, 60%, 80% and 90%, soot primary particles were examined with regard to their reactivity by TPO and HRTEM analysis. Figure 15 shows the TPO profiles of two soot samples from these test series (i-OCT3 and P25). The figure demonstrates the decrease in T max , i.e. the increase in reactivity, with increasing mass loss due to oxidation, which is more pronounced for the more reactive carbon blacks (i-OCT3: 944 K to 860 K, P25: 1010 K to 964 K, AC100: 1063 K to 1052 K) than for the less reactive ones. The temperature at maximum oxidation rate, T max , is dependent on the kinetics of oxidation and connected to the apparent activation energy of the oxidation, compare Sect. 2.2.1. A change of T max by 10 K results in a change of the activation energy of about 1.5 kJ mol −1 . The decrease of T max with proceeding burn-out is steeper at burn-out ratios larger than 60% compared with lower burn-out ratios. The analysis of the nanostructure of the primary particles from these two soot samples confirms this development, as given in Figs. 16 and 17. The decrease in T max from 944 K of the untreated sample i-OCT3 to approx. 860 K during oxidation up to a mass decrease of 90% is associated with a significant decrease in the expansion of the graphene-like layers in the primary particles (Fig. 16, left part). It is interesting that the size distribution of the primary particles hardly changes during the stepwise oxidation (Fig. 16, right part), suggesting an internal burning mode rather than a shrinking core mode. The same trend can be seen for the sample P25 in Fig. 17 and for ACFL and AC100 (not depicted here). At the prevailing large time scales for the oxidation of soot, the diffusion rate of oxygen into the structures of the primary particles competes well with the chemical reaction rate. Similar behavior has also been observed in the oxidation of soot catalyzed with Fe 2 O 3 under similar conditions (Reichert et al. 2010) and in flames (Schäfer et al. 1995).

Fig. 16 Fringe length distribution of primary particles from i-OCT3 and primary particle size distribution at stepwise oxidation

Similar interesting features can be extracted from Raman scattering results given in Fig. 18. The figure displays stacked Raman spectra of primary particles from i-OCT3 at stepwise oxidation (left) and the evaluation of I D1 /I G according to the procedure described in Sect. 2.2.2 (right). The graphite band (G band) at about 1580 cm −1 is attributed to an ideal graphitic lattice and indicates highly ordered structures. The D1-peak at about 1355 cm −1 becomes active only in the presence of disordered graphene-like structures. The estimation of the relative intensities of D1- and G-bands using the 5-band fitting procedure, therefore, provides qualitative information about the abundance of graphitic ordered structures and disordered regions in the primary particles.
High values of I D1 ∕I G indicate predominance of low ordered graphene-like structures with small extension whereas low values suggest well ordered, graphitic graphene-like structures of large extension.
The spectra of i-OCT3 at different burn-out ratios in Fig. 18 give a qualitative picture of the evolution of the intensity ratio, and the quantitative evaluation is displayed in the right part of Fig. 18. The intensity ratio I D1 /I G is initially high for the virgin soot i-OCT3, decreases to less than half and decreases further slightly with progressing oxidation until a burn-out ratio of 80%. At higher burn-out ratio it increases again, indicating that the relative abundance of ordered regions within the primary particles increases on account of the disordered amorphous regions. At high burn-out ratios also the highly ordered structures decompose. This behavior suggests that the very reactive, disordered structures are oxidized first, while the less reactive structures, whose reactivity nevertheless increases during the oxidation, see Fig. 15, are oxidized later and finally degraded to smaller structures. Similar trends can be found for e.g. AGFG and NGFG, where the different reactivity against oxidation can be traced back to the relative abundance of ordered and disordered graphene layer structures and is reflected by the Raman spectra of these soot types.

Fig. 17 Fringe length distribution of primary particles from P25 and primary particle size distribution at stepwise oxidation
Mechanistic Interpretation
As discussed in Sect. 2.2.4, in this work as well as in similar work from the literature, see e.g. Sendt and Haynes (2011), Edwards et al. (2013), Frenklach and Mebel (2020), the mechanistic interpretation of the oxidation of soot is based on model molecules. These are supposed to depict the graphene-like structures in the primary soot particles, omitting interactions between the single layers. The employed model molecules, in this work pyrene, are approximations with regard to the energy levels of the individual carbon atoms in the graphene-like structures. However, they facilitate the identification of essential reaction pathways for reactions of the graphene-like layers with O 2 and for estimating and comparing reaction rate coefficients. These limitations restrict the focus of the following discussion to clearly identifiable trends.
The experimental results discussed in the previous sections reveal that the property determining reactivity is predominantly the fringe length of graphene layers. The larger the fringe length the lower the reactivity. Soot types with different reactivity are characterized by different fringe sizes. Different soot types combined in a soot primary particle contain regions of different fringe length which are oxidized independently leading to multiple peaks in the TPO traces with different T max , see Sect. 3.2. The independent oxidation of graphene layers of different reactivity suggests, that after primary activation of a graphene-like structure the further degradation proceeds at higher reaction rate than the activation of additional layers. Another experimental result is the increasing reactivity of Fig. 18 Raman spectra of primary particles from i-OCT3 (left) at stepwise oxidation and evaluation of I D1 ∕I G according to the procedure described in Sect. 2.2.2 (right) the graphene layers with proceeding oxidation, see Sect. 3.3. The mechanistic interpretation of the experiments given in the following is intended to reflect these results.
The primary attack of O 2 on the graphene-like layers, represented by the pyrene model molecule, can take place at edge C-H sites in a sequence of three edge C-H sites, as depicted in the energy diagram Fig. 19. The attack of O 2 at internal carbon atoms, at edge carbon atoms or at C-H sites in sequences with only two edge C-H sites, which may result in different energy barriers, is not detailed here, because it is followed by a complex reaction system contributing little to the degradation of the polycyclic structure. The energy diagram illustrates the attack of O 2 on pyrene A4 via abstraction of a hydrogen from a C-H site forming a pyrenyl radical, A4J + OOH. The H-abstraction occurs via transition state TS1 representing an energy barrier of 307.1 kJ mol −1 relative to pyrene and O 2 . The respective rate coefficient, fitted to the format k 0 ⋅ exp(−E a /(R⋅T)), is given in Table 3. The experimental reaction rate coefficients according to Eq. 4 are of the order of magnitude of k (4) ox ≈ 5 ⋅ 10 6 ⋅ exp(−150/(R⋅T)) K −1 , see Table 2, with E a in kJ mol −1 . Considering the heating rate of 5 K min −1 , this results in rate coefficients at 900 K (which is the temperature range for maximum conversion rates) of k (3) ox (900 K) ≈ 8 ⋅ 10 −4 s −1 . The corresponding time scale then amounts to about 20 min. The reaction rate coefficient of the H-abstraction, reaction (1), amounts to 1.1 ⋅ 10 12 ⋅ exp(−325.3/(R⋅T)) cm 3 mol −1 s −1 . Assuming a constant O 2 concentration of 5 vol%, this results in k (1) (900 K) ≈ 1 ⋅ 10 −13 s −1 . Compared with the measured rate coefficients for the oxidation of soot, the rate coefficient for the activation of the graphene surrogate molecule via the attack of O 2 is several orders of magnitude lower and seems too small to make this step appear as the predominant channel for the degradation of the model molecule. An alternative activation reaction would be the H-abstraction by radicals such as O, OH or OOH, see Table 3, reactions (2)-(4). The rate coefficient for e.g. the H-abstraction by O at 900 K is k (4) (900 K) ≈ 4 ⋅ 10 8 cm 3 mol −1 s −1 . A concentration of O of about 2.5 ⋅ 10 −11 mol cm −3 results in a time scale for this reaction of about 100 seconds, which is comparable to the experimental time scales. Figure 19 also contains the energy diagram for the attack of O 2 on the pyrenyl radical, A4J + O 2 → A4OOJ, reaction (5), which constitutes another possibility of activation. The rate coefficient for this reaction at 900 K is k (5) (900 K) ≈ 7 ⋅ 10 1 cm 3 mol −1 s −1 and, assuming a constant O 2 concentration of 5 vol%, this results in k (5) (900 K) ≈ 5 ⋅ 10 −5 s −1 , which is several orders of magnitude larger than k (1) (900 K). Depending on the history of soot during formation or finishing treatment, soot contains radical sites in variable density in the graphene-like structures (Yamanaka et al. 2005). Compared with the model molecule A4, where just one radical site A4J is viewed, graphene-like structures in soot primary particles contain multiple radical sites, so that the reaction rate of oxygen with radical sites in graphene-like structures in soot primary particles may be a multiple of the rate of reaction (5), resulting in reasonably smaller time scales.
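The order-of-magnitude estimates quoted in this paragraph can be retraced with a few lines; ambient pressure and ideal-gas behaviour are assumptions introduced here only to convert the 5 vol% O 2 into a molar concentration.

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
T = 900.0          # K
p = 101325.0       # Pa, ambient pressure assumed
x_O2 = 0.05        # 5 vol% O2 as in the TPO experiments

c_O2 = x_O2 * p / (R * T) * 1e-6               # mol cm^-3, roughly 7e-7

def arrhenius(k0, Ea_kJ, T):
    return k0 * np.exp(-Ea_kJ * 1e3 / (R * T))

# reaction (1): A4 + O2 -> A4J + OOH (bimolecular, cm^3 mol^-1 s^-1)
k1 = arrhenius(1.1e12, 325.3, T)               # ~1e-7 cm^3 mol^-1 s^-1
print(k1 * c_O2)                               # ~1e-13 s^-1, as quoted above

# experimental coefficient from Eq. 4 (K^-1), converted with the 5 K/min heating rate
k_ox = arrhenius(5.0e6, 150.0, T)              # ~1e-2 K^-1
print(k_ox * 5.0 / 60.0)                       # ~8e-4 s^-1, i.e. a time scale of ~20 min
```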
When lumping together the reaction rates of the three activation channels, reaction (1), reactions (2)-(4) and reaction (5), the resulting reaction rate comes close to the experimental ones.
The consecutive reaction of A4J with O 2 leads to A4OOJ, providing three parallel pathways to A4JDO + O (reaction path a), A4JYC2O2 (reaction path b) and A4OJDO (reaction path c), see Fig. 19. This opens pathways to further degradation via A4JDO + O 2 (a1), A4JDO + O (a2), A4OJDO and A4JYC2O2 (bc1). The energy diagrams for these reaction pathways are given in Figs. 20, 21, and 22. The respective reaction rate coefficients are contained in Table 3.
As can be seen in Fig. 20, the consecutive reactions in reaction path a1, reactions (9)-(13), end with the radical species A3J, which is a polycyclic radical with one ring less compared with A4J, and 2 molecules of CO. The bottleneck reaction of this sequence is reaction (9), see the rate coefficients listed in Table 3. Assuming again a constant O 2 concentration of 5 vol%, the rate coefficient of this reaction is k (9) (900 K) ≈ 5 ⋅ 10 −5 s −1 , which is larger by orders of magnitude than that of reaction (1). Assuming again multiple sites accessible for oxidation, the reaction sequence contributes considerably to the degradation of graphene-like structures.
The alternative pathways a2, reactions (14)-(21), see Fig. 21, and bc1, reactions (22)-(27), see Fig. 22, form the species A3CJ and CO. The corresponding rate coefficients of these reactions listed in Table 3 are also large compared with the rate coefficient of the primary activation reaction and illustrate a considerable contribution to the degradation of graphene-like structures. The species A3CJ is further converted by the attack of O and O 2 to A3J, releasing CO and CO 2 via reaction paths bc11 and bc12. The corresponding energy diagrams are given in Fig. 23 and the rate coefficients are also contained in Table 3. The follow-up reactions via reaction paths a, b, c and a1, a2 in combination with bc1 and bc11, bc12 form a replication reaction scheme leading to the degradation of one six-membered ring after the other (AnJ → A(n − 1)J), releasing CO, CO 2 and, via reaction (6), also oxygen atoms. In addition, the activation reaction (1) delivers OOH, which together with O is a potential candidate for activating aromatic ring systems via reactions (2) and (4) with a low energy barrier. The discussion of the reaction paths presented here does not claim completeness. However, the developed activation/replication mechanism explains some experimental phenomena such as the increase in reactivity with increasing burn-out, the independent oxidation of differently reactive compartments in the soot or the increase in ordered structures at the expense of disordered/disturbed reactive structures, see Sect. 3.3. The primary attack of O 2 at graphene-like layers via reaction (1) at temperatures in the range of 900 K contributes only little because of the high activation energy causing extremely low reaction rates. In contrast to this, an alternative activation reaction via radicals such as O, OH or OOH provides sufficiently high reaction rates at radical concentrations as low as 1 ⋅ 10 −11 mol cm −3 .
Reactions (1) and (6) in the sequence of follow-up reactions deliver OOH and O, so that the oxidation of graphene-like structures includes the formation of activating species. Additionally, O, OH and OOH may be produced via reactions in the gas phase. Another possibility of activation constitutes the reaction of O 2 with radical C-atoms requiring much lower activation energies, see reaction (5) in Table 3. The concentration of these "active" sites depends on the kind of soot and correlates with the method of synthesis. Flame generated soot and engine soot contain radical C-atoms in different concentration (Yamanaka et al. 2005) than e.g. thermally aged soot (P25, AC100) and exhibit, therefore, different reactivity. Graphene-like structures in soot primary particles contain more edge active sites than the model molecule A4J, which can be attacked in parallel, multiplying the degradation rate of these structures.
In summary, the mechanism given in Table 3 describes an activation/replication mechanism AnJ → A(n − 1)J where the rates of the follow-up reactions are sufficiently high compared with the initial activation of the graphene-like structure by O 2 . This self-activation would explain the independent oxidation of different compartments within the primary particles. The oxidation rate depends on the initial concentration of radical C-atoms in the graphene-like structures in the soot and on the concentration of the radical pool of O, OH and OOH. Considering the different activation steps via the attack of O 2 and radicals at A4 and the attack of O 2 at AnJ and their different reaction rates as discussed above, the experimental time scales for the oxidation can be reproduced with that replication/activation mechanism.
For deriving the reaction rate expression in Eq. 4, the density of "active" sites constituted by C-H sites or radical sites was assumed to be proportional to the soot mass. For graphene-like structures, the ratio of edge C-H sites to total carbon atoms in the graphene-like layers increases with decreasing extension of the layers. The density of active sites, therefore, increases with decreasing size of the layers, enhancing the reaction rates for reactions of type (1)-(5). Then the reactivity increases with decreasing extension of the graphene-like layers, which is the case for progressing oxidation of the primary soot particles. Also, the ratio of C-H sites to the total carbon atoms depends on the hydrogen content of the soot and generally a higher hydrogen content leads to higher reactivity (see Table 1). The same arguments hold for the degradation of graphene-like layers via the reaction of O 2 with disordered/disturbed structures in the primary particles such as large graphene-like structures partially equipped with functional groups, aliphatic side chains and aliphatic bridges. The reactivity will be the higher, the higher the relative concentration of distorted/reactive structures within the graphene-like layers. In addition, the stability of a graphene-like layer with or without radical/active site increases with its extension. An extended ring system facilitates the stabilization of the intermediate stage by conjugation (hyper-conjugation).
Therefore, a distribution of activation energies for reactions (1)-(5) results depending on the stability of the activated graphene-like layers and thus on the layer extension. This supports again the findings of Sect. 3.3, that the reactivity increases with proceeding oxidation, viz. decreasing extension of the graphene-like layers.
While the investigation presented in this work identifies the predominant parameters determining the reactivity of "pure" soot, the oxidation of soot in GPFs may be additionally affected by metals or metal oxides present in the emitted soot. This aspect opens the field of catalytic reactions of metals and metal oxides, which cannot be covered here. However, some principles developed in this work may be transferred also to catalytic oxidation, since recent studies show that initial graphene-like structures are shortened by adding catalytic additives (Rinkenburger et al. 2017).
Conclusions
The reactivity of soot from flames, soot from IC engines and carbon blacks under oxidation conditions representative of GPF regeneration has been investigated. Soot reactivity is determined in dynamic TPO experiments and the soot primary particle nanostructure is investigated by HRTEM. Further, UV-visible spectroscopy, Raman scattering and other diagnostic techniques are used to study the properties connected to the reactivity of soot and to corroborate the experimental findings. It is found that nanostructural characteristics predominantly affect reactivity.
From the TPO experiments and the bulk properties of the soot samples, no clear basic causes for the differences in reactivity of the different kinds of soot are obvious. Small soot primary particle sizes are not well correlated with high reactivity and also large specific surface area, which correlates with small primary particle sizes, goes along with different reactivity. Also the content of volatiles present in the different samples, the C/H ratio and the oxygen content do not lead to comparable reactivity. Soot samples with alike bulk properties, e.g. P25, i-OCT1, A22 with a CMD of ≈ 30 nm or NGFG, C50_1200, C50_1600 with a BET of about 420 m 2 ⋅ g −1 are unalike with respect to reactivity (widely varying T max , the temperature of maximum oxidation rate during TPO). Vice versa, soot samples with alike reactivity, e.g. P90, i-OCT3, P85, C50_1200 with T max ≈ 940 K are unalike with respect to bulk properties such as CMD or BET.
In contrast to this, the nanostructural properties clearly affect the reactivity of the investigated soot samples. The distribution of fringe lengths and separation distances in the primary particles exhibit a clear correlation to reactivity. The smaller the fringe lengths and the wider the distribution of the fringe separation distance, the higher the reactivity. Small fringe lengths are connected to wide distributions of the separation distance and vice versa. The nanostructure of the soot primary particles essentially determines their reactivity against oxidation and a simple reactivity-nanostructure relation linearly correlates the mean length of the fringes in the primary particles with T max , the temperature of maximum oxidation rate during TPO. UV-visible absorption spectra affirm qualitatively the structure-reactivity correlation.
The oxidation rates of soot samples with multiple T max can be reproduced by a linear combination of the oxidation rates of different soot samples with T max in the respective range. The conclusion from this behavior is that different compartments in the soot primary particles are oxidized independently governed by their reactivity. The extension of the fringe layers is the essential parameter describing the reactivity. Therefore, the nanostructure of soot primary particles containing different compartments with different reactivity can be composed also by a linear combination.
The reactivity of soot particles increases during oxidation. As the soot mass decreases due to oxidation the extension of the fringes decreases leading to an increase of reactivity according to the structure-reactivity correlation. This effect is more pronounced for the more reactive carbon blacks than for the less reactive ones. The size distribution of the primary particles hardly changes during stepwise oxidation suggesting an internal burning mode rather than a shrinking core mode under the employed conditions.
The mechanistic interpretation on the basis of quantum-chemistry estimations reveals a replication/activation mechanism which explains the experimental phenomena and the interpretation of reactivity on the basis of the nanostructural analyses. | 2020-09-03T09:03:06.332Z | 2020-08-29T00:00:00.000 | {
"year": 2020,
"sha1": "9fc736e39003aa081d32237dd641b661c8d22ec7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10494-020-00205-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "1d99d49eb2105bdc4cb708092ff13a03efcc0d6c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
9079506 | pes2o/s2orc | v3-fos-license | Yes, research can inform health policy; but can we bridge the 'Do-Knowing It's Been Done' gap?
This editorial introduces a new Supplement in Health Research Policy and Systems and highlights the importance of assessing the impact of health research by examining whether we can move from 'Know-Do' to 'Do-Knowing It's Been Done'
Health Research Policy and Systems (HARPS) is publishing a supplement of papers that is well timed as it includes accounts of various strategies used by research organisations to strengthen the research to policy and practice interface [1]. It comes as the World Health Organization (WHO) plans for the 2012 edition of its flagship publication, the World Health Report, which will focus on the role of research in improving the health status of populations [2].
The forthcoming World Health Report, to be entitled: 'No Health Without Research', reflects an ever-growing focus on the vital role of health research, and how best to bridge the 'Know-Do' gap. In 1990 the independent Commission on Health Research for Development published a landmark report, Health Research: Essential Link to Equity in Development [3]. The WHO has been playing an increasingly important part in promoting the role of health research. It organised the Mexico ministerial summit on health research [4] and the accompanying World Report on Knowledge for Better Health: Strengthening Health Systems [5]. That was followed by the second ministerial summit at Bamako [6] and the First Global Symposium on Health Systems Research organised by the WHO/Alliance for Health Policy and Systems Research at Montreux in November 2010.
The specific role of health research in informing health policies has always been a major part of the analysis about the importance of health research [7]. In 2003 HARPS published a review and analysis of the topic [8] that had been undertaken as part of the leadup to the Mexico summit. That paper made an early claim that, 'A full review of the many possible meanings of research impact reveals that there may be more utilisation in policymaking than is sometimes recognised.' [8] The various overlapping themes in the literature include: 1. promoting the greater use of research and identifying the facilitators of and barriers to research making an impact on policy, which is sometimes framed as part of the debate about how best to bridge the 'Know-Do' gap; 2. describing specific attempts to enhance the impact made by research on policy; 3. one-off explorations of how far research has informed health policies in specific cases; and 4. developing systematic methods to assess and monitor the impact made by health research on policies, which could seen as addressing what we are calling the 'Do-Knowing It's Been Done' gap.
Various studies address these themes, often in overlapping ways. Whilst interest and activities have been intensifying in the last decade, it is important to recognise there have been major long-standing attempts within some health research systems to develop approaches in which policymakers and researchers work together to identify priorities for research that will meet the needs of policymakers. A large-scale formative evaluation by Maurice Kogan and Mary Henkel of one attempt in the English health department's R&D system was published as early as 1983 [9]. It highlighted many of the difficulties in getting policymakers and researchers to develop the productive long-term relationships to improve the impact. It also, however, developed many of the concepts that are now used more widely, such as the importance of the collaborative approach, the role of knowledge brokers and the role of receptor bodies. A second edition [10] highlighted the continuing attempts to address the issue in the English health research system, and a subsequent paper in HARPS provided a full account of how 30 years of reform has resulted in a health research system that has had successes in meeting the needs of various stakeholders, including some policymakers [11].
The recent contribution made by HARPS to the evolving analysis and debates
In HARPS we have attempted to contribute to all of the four themes outlined above by publishing a range of relevant papers, series and supplements. In 2009 we published a supplement called SUPPORT Tools for evidence-informed health Policymaking (STP) which consisted of a series of guides on how to increase the impact of research on policy with an introduction by John Lavis, Andy Oxman and colleagues [12]. This is a major attempt to reduce the 'Know-Do' gap and, despite the various examples of research making an impact, there is clearly much work still to do to reduce that gap. In turn, that series helped inform an innovative approach from Melissa Pearson and colleagues whose article in HARPS combines policy sciences traditions with the focus on pathways provided by the SUPPORT tools to promote evidence informed policymaking. These combine to facilitate prospective policy analysis that informed policymaking on intentional self poisoning in Sri Lanka [13].
In May 2010 a symposium was held at Harvard University, Boston, USA, to mark the 20th anniversary of the report from the Commission on Health Research for Development. Julio Frenk and Lincoln Chen wrote a Commentary on the symposium that was published in HARPS. Frenk and Chen observe that the participants 'underscored the imperative that knowledge be translated into evidence that can guide policy and implementation.' [14]. At a more specific level a paper in HARPS by Rajeev Gupta and colleagues [15] highlights the body of evidence that should be translated into policy for cardiovascular disease control in India.
The interest in many countries in the field of increasing research use is illustrated by a range of papers published in HARPS in the last 16 months. One paper considers the role being played by the print media in 44 countries in Africa, the Americas, Asia, and the Eastern Mediterranean as one dimension of the climate for evidence-informed health systems [16]. Two linked papers describe surveys used to gather opinions about bridging the gaps between research, policy and practice in 10 low and middle income countries [17,18]. Various initiatives in this field from The Netherlands have also been reported, including an exploration of barriers between epidemiological research and local health policy formation [19] and an approach for assessing the use (including impact on policy) of research produced by one of the Dutch university medical centres [20]. A further paper describes a framework for developing an evidence-based comprehensive tobacco control program in Israel [21].
A key paper in HARPS by Godfrey Woelk and colleagues from December 2009 won that year's prize in the Medicine category for the best groundbreaking research published in any of BioMed Central's journals: Translating research into policy: lessons learned from eclampsia treatment and malaria control in three southern African countries [22]. This success is another indication of the increasing importance of the topic, and the paper provides further important examples of the use of research in health policy and an insightful analysis of barriers and facilitators.
As noted, there have been various other one-off and/or small-scale studies on the impact of research on policy; some indicating a high level of influence. These include studies of health technology assessments in Quebec, Canada [23] and analysis of 44 operations research projects aiming to improve reproductive health services in Guatemala [24].
It's time to recognise the increasingly important role played by health research
Building on our own paper in 2003 [8], as editors of HARPS we have been pushing the case for increasing recognition that health research does impact on policy more frequently than is often acknowledged [25]. Of course not all health research can make an impact on policy, nor should this be expected. However, whilst it is important that a major focus is maintained on bridging the 'Know-Do' gap, it is also time that more attention was paid to the 'Do-Knowing It's Been Done' gap so as to ensure that evidence is captured about when and how health research makes an impact on policy.
Developing robust techniques to assess the impact of health research was recognised in the 1990s as being important for various reasons: it provides accountability for funds spent, justification for future spending and also helps identify ways to organise health research so as to achieve greater impacts in the future [26]. If real progress is to be made in evaluating the different mechanisms used to increase the use of research, it could be argued that it is important to know what impact has been made by the research that is translated into action. As a corollary to the collaborative approach developed by Kogan and Henkel [9], there is recognition of the need to focus on issues at the interfaces between policymakers and researchers as a way of helping to understand how the impact on policy has come about [26,27]. The Payback Framework developed in the mid 1990s by Buxton and Hanney [26], and elaborated in an article in HARPS [27], addresses these concerns. It incorporates consideration of the permeability of the interfaces between the research system and the wider political and healthcare systems. The issue of permeability at the interfaces includes questions about how far the wider healthcare, political and social systems can collaborate with researchers to produce an agenda that will engage researchers, and how far the findings from research can make an impact on the wider systems. The Payback Framework comprises not only a multi-dimensional categorisation of benefits from research, but also a model of the processes of research production and use that can help in assessing the benefits achieved [26]. In this approach, therefore, the analysis of the value of the interface mechanisms used to help achieve impacts is informed by the assessment of the actual impacts that arise from the translation of the research.
Such considerations could be important in progressing effective implementation of the framework developed by Lavis and colleagues [28] for evaluating what has been done to promote efforts to link research to action. Their framework covers a wide range of mechanisms that might be used, including: push efforts by producers of research; user pull efforts; exchange efforts involving researchers and users working together in ways such as through the use of knowledge brokers; and integrated efforts. Their framework also recognises the importance of evaluating such mechanisms. Approaches such as the Payback Framework [26] provide a way to assess the wider impacts of the research that is translated through the various possible translation mechanisms available. Therefore, these ways of assessing the wider impacts might assist attempts to evaluate the effectiveness of the various translation mechanisms.
Whereas some attempts to assess the impact of research look just at policy, other frameworks, such as the Payback Framework, include impact on policy as part of a multi-dimensional categorisation of benefits that also includes health and health equity benefits as well as broader economic benefits [26]. Indeed, establishing the impact on policy (especially using a broad definition to include clinical policies) can be seen as a key factor in helping to identify the wider impacts [29].
To demonstrate when and how research has an impact on policy, studies can either start with research and trace the impact forwards, or start with policies and attempt to trace the impact backwards to the relevant research that might have influenced the policy. In our 2003 review we suggested that the evidence indicated it could be less difficult to trace the impact forwards than it was to work backwards, and this opinion was strengthened by a review in 2007 [29]. Whilst it might be more difficult to trace policy impacts back to specific pieces of research, there is increasing evidence of policymakers acknowledging research can inform their decision-making. In a recent study of national policymakers in six countries, Adnan Hyder and colleagues show that whilst there are various barriers to the use of research, the policymakers interviewed, 'were unequivocal in their support for health research and the high value they attribute to it' [30]. Several issues will have to be addressed in any attempts to put greater emphasis on bridging the 'Do-Knowing It's Been Done' gap. There are the different, although related, processes of using specific research results through commissioning or pushing primary research, and using secondary research through reviews and synthesis. Indeed, there is sometimes a lack of clarity in the literature about whether the emphasis is on enhancing (and assessing) the use of the findings produced by researchers within the local healthcare/research system, or on enhancing (and assessing) the use of the relevant parts of the global body of health research. Clearly both activities are important, and there is an increasingly diverse range of approaches used for pulling together locally generated and synthesised global knowledge in a way that is most appropriate for policymakers in specific countries. Furthermore, there are overlaps in that a collaborative approach might be as valuable in getting policymakers to pay attention to secondary research as it is with primary research. However, if the case is to be made for funding local primary research in low and middle income countries (because, for example, its findings are more likely to be relevant to policymakers in those countries), then it is important that sufficient attention is given to assessing the impact of such research on policy.
Extending the analysis: a new supplement in HARPS
HARPS is now publishing a supplement consisting of a diverse range of papers first presented at a conference on getting research into policy and practice in the field of sexual and reproductive health (SRH) and HIV/AIDS. These papers cover a wide range of topics, many of which are related to the main themes identified above, including the importance of focussing on ways not only of enhancing the impact of research (on policy and practice) but also of demonstrating that impact has been achieved. In an introductory paper, Sally Theobald and colleagues state: 'The contributors to this supplement provide a body of critical analysis of communications and engagement strategies across the spectrum of SRH and HIV/AIDS research through the testing of different models for the research-to-policy interface. They provide new insights on how researchers and communication specialists can respond to changing policy climates to create windows of opportunity for influence' [1].
Here we present a flavour of the wide range of approaches and topics described by giving a brief outline of key points from three contrasting papers. Eleanor Hutchinson and colleagues examine national policymaking for cotrimoxazole as a preventive therapy for HIV infected individuals in Malawi, Uganda and Zambia [31]. The approach adopted by the authors was informed by a recent overview of the health policy literature in low and middle income countries. That review concluded with a call for analyses which consist of comparative, multi-country studies using rigorous case studies which deliberately seek to explain health policy changes in these settings [32]. Hutchinson and colleagues identify several factors that influenced the variable impact of the research in the different countries, and observe that while the findings from randomised controlled trials were not necessarily translated into policy so swiftly, 'local operational research results seem to have been taken up more quickly' [31].
Rose Oronje and colleagues [33] use a case study approach to describe the experiences of the African Population and Health Research Center in Nairobi, Kenya, and its partners, in cultivating the interest and building the capacity of the media in evidence-based reporting of reproductive health issues in sub-Saharan Africa. They conclude that the media can play a valuable role in communicating important research findings and raising the profile of overlooked and contentious public health issues to the public, including political leaders, policymakers and key stakeholders [33].
Alan Whiteside and Fiona Henry [34] examine how, where and why there was a considerable impact made by the 2007 report on the HIV and AIDS epidemic in Swaziland entitled Reviewing 'Emergencies' for Swaziland: Shifting the Paradigm in a New Era [35]. Adopting the approach of tracing the impact forwards from the original research, as described above, they explore how following a targeted communications effort, that report succeeded in raising the profile of the epidemic as a humanitarian emergency requiring urgent action from international organisations, donors, and governments. In the literature on assessing the impacts made by research on health policies it has been stressed that the quality of the research can be seen as an important factor in achieving impacts [8,26], and this is well-illustrated in Whiteside and Henry's conclusion that 'The credibility of both evidence and researcher play an important role in the use of research.' [34]. Finally, the authors end with a key observation that not only did the original report achieve many of its goals and spur an international dialogue around the issues, but also the evaluation of the report's impact provides, 'additional lessons, which can be applied to help maximise the impact of research in the future.' [34].
So, the new supplement in HARPS makes significant additions to the growing body of literature, from HARPS and elsewhere, showing that research can inform health policies, that there are various barriers and facilitators that should be analysed, and that it is also important to expand the analysis and bridge the 'Do-Knowing It's Been Done' gap.
"year": 2011,
"sha1": "ee1bcd6ce10928af352711d66906b7e18c10065c",
"oa_license": "CCBY",
"oa_url": "https://health-policy-systems.biomedcentral.com/track/pdf/10.1186/1478-4505-9-23",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee1bcd6ce10928af352711d66906b7e18c10065c",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Influence of the Nitrogen Content on the Carbide Transformation of AISI M42 High-Speed Steels during Annealing
Attempts were made to elucidate the effect of nitrogen on primary eutectic carbides in as-cast and annealed AISI M42 high-speed steel. Particular emphasis was placed on the transformation of carbides during forging and annealing in steels with different nitrogen concentrations and on the influence of the final carbides on the impact toughness of the steel. Microstructural observation, the electrolytic extraction method, X-ray diffraction analysis, automated inclusion analysis (INCASteel), and impact toughness measurement combined with fractographic observation were conducted on the specimens. Primary M2C carbides were found to be the dominant precipitates in the as-cast ingot, together with a certain amount of V(C,N). Nitrogen addition promoted the formation of fibrous M2C, whereas lamellar M2C predominated in M42 steel with a low nitrogen concentration (w[N]% = 0.006). Fibrous M2C carbides decompose into the more stable carbides M6C and MC more readily during forging and annealing than lamellar M2C. Nitrogen alloying only affected the morphologies and dimensions of the carbides, but did not change the types of carbides. These improvements in the dimensions and fractions of carbides naturally increased the impact toughness of the annealed steel. Hence, it was suggested that the addition of nitrogen to AISI M42 high-speed steel is required to achieve a homogeneous distribution of carbides and sufficient impact toughness.
However, systematic research on the effect of nitrogen on the precipitation and subsequent transformation of eutectic carbides in high-speed steel remains limited.
The aim of this study was to investigate the effect of nitrogen on the characteristics of eutectic carbides in as-cast AISI M42 high-speed steel and to evaluate the transformation of eutectic carbides in annealed steel. This study also identifies the correlation between carbide precipitates and the impact toughness of the steel. Two electroslag remelting (ESR) ingots were fabricated with different nitrogen contents, and the effects on fracture properties were investigated by observing the fracture surface of the annealed steel. The electrolytic extraction method, X-ray diffraction, and automated inclusion analysis (INCASteel) were employed to ascertain the microstructure, composition, dimensions and fractions of the carbides.
Experimental Procedures
Material and Specimen Preparation. The materials used in this study are two AISI M42 HSS ingots manufactured by vacuum induction melting followed by the protective gas electroslag remelting (P-ESR) method; their chemical compositions are listed in Table 1. The first ESR ingot is the standard AISI M42 HSS, which is denoted as M42. The other one, denoted as M42-N, has been alloyed with nitrogen. In the case of the nitrogen-alloyed steel grade, V-N alloy and ferroalloys (Fe-Cr, Fe-Mn, Fe-W) were used to adjust the chemical compositions. Each ESR ingot was 100 mm in diameter and 300 mm in length.
Inductively coupled plasma optical emission spectroscopy (ICP-OES) was employed to analyse the contents of the alloying elements W, Mn, Co, V, and Mo in the ESR ingots. The nitrogen content was measured by the inert gas fusion-thermal conductivity method.
Annealing Treatment. The as-cast ESR ingots were forged after being held at 1373 K for 120 min. The specimens used in the following annealing experiment were taken from the forged steel with dimensions of 30 × 30 × 100 mm and annealed in a vacuum induction furnace. The specimens were first heated to 1153 K for 120 min, followed by cooling to 773 K at a rate of 50 K/h. Then the specimens were left in the furnace and cooled to ambient temperature in air.
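As a quick check of the timeline implied by this heat-treatment schedule, the short sketch below (not part of the original study; the helper function and variable names are illustrative) computes the duration of the controlled-cooling segment and the total furnace-controlled time.

```python
def cooling_time_h(t_start_k: float, t_end_k: float, rate_k_per_h: float) -> float:
    """Time (h) needed to cool from t_start_k to t_end_k at a constant rate in K/h."""
    return (t_start_k - t_end_k) / rate_k_per_h

hold_h = 120 / 60.0                                   # 120 min soak at 1153 K
furnace_cool_h = cooling_time_h(1153.0, 773.0, 50.0)  # controlled cooling to 773 K

print(f"Soak at 1153 K:           {hold_h:.1f} h")
print(f"Cool 1153 K -> 773 K:     {furnace_cool_h:.1f} h at 50 K/h")
print(f"Furnace-controlled total: {hold_h + furnace_cool_h:.1f} h")
```

Under these assumptions the controlled-cooling segment alone takes 7.6 h, so the furnace-controlled part of the anneal lasts roughly 9.6 h before the final cool to ambient temperature.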
Microstructural Analysis. The specimens prepared for the observation of eutectic carbide formation and transformation were taken from the as-cast ESR ingots and the annealed steels, respectively. The specimens were diamond polished and examined by a scanning electron microscope (SEM). The types and compositions of the carbides were analysed using energy-dispersive spectroscopy (EDS) and X-ray diffraction analysis. Automated inclusion analysis (INCASteel) was employed to measure the size distribution and volume fraction of the carbide precipitates. INCASteel could detect carbides with a minimum size of 0.587 μm over a scanned area of 10 mm², and measure the equivalent diameter of each carbide in the field of view.
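The 'equivalent diameter' reported by automated feature analysis of this kind is commonly taken as the diameter of a circle with the same area as the measured particle; the sketch below illustrates that convention together with a minimum-size filter comparable to the 0.587 μm detection limit. It is a generic illustration with made-up areas, not the INCASteel implementation.

```python
import math

def equivalent_diameter_um(area_um2: float) -> float:
    """Circle-equivalent diameter (μm) of a particle from its measured area (μm²)."""
    return 2.0 * math.sqrt(area_um2 / math.pi)

# Hypothetical measured carbide areas (μm²) from one field of view
areas = [0.2, 1.2, 4.5, 28.0, 110.0]
detection_limit_um = 0.587

diameters = [equivalent_diameter_um(a) for a in areas]
detected = [d for d in diameters if d >= detection_limit_um]

print([round(d, 2) for d in diameters])
print(f"{len(detected)} of {len(diameters)} particles lie above the detection limit")
```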
Carbide Collection and Analysis. In order to identify the three-dimensional morphology and phase compositions of the eutectic carbides, an electrolytic extraction method was used to collect carbide precipitates from the ESR ingots. A plate sample with dimensions of 20 × 80 × 3 mm was used as a cathode and was electrolyzed in an organic solution (1% tetramethylammonium chloride + 10% ethylene acetone methanol solution). The total current was controlled to be less than 0.6 A, and the current density was 0.004 to 0.006 A/cm². The temperature of the electrolyte was kept between 0 and −5 °C. After electrolysis, the carbide precipitates were collected and cleaned with 10 g/L citric acid alcohol and distilled water. The dried carbide precipitates were observed using SEM, and X-ray diffraction analysis was also employed to identify the phases of the collected carbides.

Impact Toughness Testing. The impact toughness was obtained for the annealed specimens, whose dimensions were 10 × 10 × 55 mm with a notch, in a JB-W300A test machine. The value of the impact energy is the average of three measurements. Fracture surfaces of the fracture toughness specimens were examined using SEM.
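Referring back to the electrolytic extraction parameters above, a back-of-envelope check (an illustrative sketch, not taken from the paper, which assumes the full plate surface is exposed) shows that the stated current-density window corresponds to a total current well below the 0.6 A limit for a plate of the quoted dimensions.

```python
# Plate dimensions from the text (mm); exposed area assumed to be the full surface.
length_mm, width_mm, thickness_mm = 80.0, 20.0, 3.0
area_mm2 = 2 * (length_mm * width_mm + length_mm * thickness_mm + width_mm * thickness_mm)
area_cm2 = area_mm2 / 100.0   # 3800 mm² = 38 cm²

for j in (0.004, 0.006):      # current-density window, A/cm²
    print(f"j = {j:.3f} A/cm²  ->  I = {j * area_cm2:.2f} A")
```

With these assumptions the implied current is roughly 0.15 to 0.23 A, consistent with the stated limit of 0.6 A.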
Results and Discussions
Precipitate Formation in AISI M42 HSS Calculated Using Thermo-Calc Based on Different Nitrogen Contents. The types of carbides and carbonitrides present in conventional and nitrogen-containing steel were calculated using Thermo-Calc software (TCFE7 database), as shown in Fig. 1. Three different types of carbides were found to precipitate in AISI M42 HSS under the conditions of equilibrium solidification and cooling, i.e., M6C, M7C3 and M23C6. First, M6C-type carbides precipitate at approximately 1540 K, whereas M7C3 and M23C6 carbides precipitate successively at 1300 K and 1030 K, respectively (Fig. 1(a)). The carbonitride type is mainly V(C,N) because vanadium is the strongest nitride former in AISI M42 HSS. The precipitation temperature of V(C,N) increases with increasing nitrogen content, while the precipitation temperatures of the other carbides are not greatly affected by the change in nitrogen content (Fig. 1(b)). In conventional M42 steel with a nitrogen content of 0.006%, V(C,N) appears at 1500 K after the M6C carbides. When the nitrogen content increases to 0.011%, V(C,N) precipitates form preferentially compared to the carbides, with a precipitation temperature of 1550 K. In addition, an HCP phase also precipitates during the equilibrium solidification of AISI M42 steel. Figure 2 presents the change in the element contents of the HCP phase as a function of temperature. According to Fig. 2(a) and (b), the HCP phase is a Mo-rich close-packed hexagonal structure containing small amounts of V and C. Changes in nitrogen content have little impact on the composition of the HCP phase, but they clearly affect the precipitation temperature. In conventional M42 steel with a nitrogen content of 0.006%, the HCP phase starts to precipitate at 1393 K (Fig. 2(a)). When the nitrogen content increases to 0.011%, the precipitation temperature decreases to 1326 K (Fig. 2(b)). Consequently, the precipitation temperature of the HCP phase decreases with increasing nitrogen content. Under the non-equilibrium solidification conditions encountered in production, AISI M42 HSS has a ledeburitic microstructure that consists of ferrite and large primary M2C or M6C carbides [6,7,16-18]. The carbide type is either eutectic metastable M2C or stable M6C, depending on the chemical composition and solidification conditions of the steel.
Effect of Nitrogen as an Alloying Element on the As-cast Structure. A series of backscattered electron images of the microstructure of the ESR ingots with different nitrogen contents is shown in Fig. 3. The as-cast structure of AISI M42 HSS can be divided into two types of constituents: a dendritic matrix of ferrite with small carbides, and a ledeburite of carbides and ferrite. A previous study [19] demonstrated that the smaller carbides were mostly V-rich MC or M(C,N), while the large primary carbides were Mo-rich M2C. In conventional M42 steel, the M2C carbides present a lamellar shape with dimensions of 5 to 20 μm in length and 1 to 2 μm in width (Fig. 3(a)). In the case of nitrogen alloying, the M2C carbides become thinner and develop into a fibrous shape. Meanwhile, the distance between the carbides in the ledeburite becomes narrower (Fig. 3(b)). Table 2 lists the contents of alloying elements in M2C carbides of different shapes, as analysed by EDS. It is found that the content of alloying elements in lamellar M2C is higher than that in fibrous M2C. This means that the alloying elements remaining in the matrix increase with the increase of nitrogen content in the steel. Figure 4 shows the SEM micrographs and EDS spectrum of the carbides extracted from the conventional and nitrogen-alloyed ESR ingots. The average compositions of the observed carbides analysed by EDS are listed in Table 3. From Fig. 4(a) and (b), it is clear that the morphology of carbides in the conventional M42 as-cast ingot differs distinctly from that of the carbides after nitrogen treatment. Carbides electrolytically extracted from the conventional ESR specimens are mainly angular polyhedrons, which easily give rise to stress concentration. With the increase of nitrogen content in the steel, both the morphology and the dimensions of the carbide precipitates are modified to a large extent. Figure 5 shows the three-dimensional morphology of typical individual carbides observed in the specimens. Coarse carbides are extracted in bulky form from the conventional ESR ingots (Fig. 5(a) and (b)). In contrast, the carbides in the nitrogen-alloyed steel present a fibrous or honeycomb morphology (Fig. 5(c) and (d)).
According to the EDS analysis, as shown in Fig. 4(c) and Table 3, the observed carbides contain mainly Mo, with trace amounts of Cr, V, Co, W, and Fe. There is no palpable difference in the alloying-element compositions of the carbides between the conventional and nitrogen-alloyed ESR ingots. However, the concentrations of alloying elements in the carbides are higher in specimen M42 than in specimen M42-N. This phenomenon is in reasonable agreement with the measurement results in Table 2. As a consequence, increasing the nitrogen content improves the morphology of the carbides and decreases the concentration of alloying elements in the carbides. The X-ray diffraction data of the carbide precipitates are shown in Fig. 6(a) and (b). The presence of M2C and V(C,N) is detected in both sets of carbide precipitates. This result indicates that nitrogen addition does not change the types of carbides in the ESR ingot, as confirmed by the SEM and EDS results.
Effect of Nitrogen on the Carbide Transformation and Impact Toughness of Annealed Steel.
M2C-type carbide is metastable and decomposes into M6C and MC during forging and annealing of the steel [20-22]. Compared with M6C, MC is rich in vanadium, the amount of which depends on the vanadium concentration in the high-speed steel. It is important to point out that V-rich MC usually exists as the composite carbonitride V(C,N), rather than as a monocrystalline carbide, in nitrogen-containing steels. Meanwhile, VC and V(C,N) have the same crystal structure, and their lattice constants are very close. Accordingly, the V-rich MC obtained by carbide transformation was also regarded as V(C,N) in this study, as the X-ray diffraction determination of the precipitates was based on their crystallography and not their composition.
The microstructures of the M42 and M42-N specimens after the forging and annealing treatment are shown in Fig. 7. It is clear that the continuous network distribution of eutectic carbides changed into a broken network in each annealed specimen. The ratios of the number of carbides of different sizes to the total number of carbides in both specimens were evaluated by INCASteel and are shown in Fig. 8. In conventional M42 steel, 32 percent of all carbide precipitates are less than 2 μm in size, and the proportion decreases with increasing carbide size. Moreover, carbides larger than 14 μm still account for approximately 5 percent. In the case of nitrogen alloying, the proportion of carbides smaller than 2 μm shows an increase of 20 percent over conventional M42 steel, and more than 90 percent of the carbides are less than 10 μm. The statistical results indicate that fibrous M2C carbides decompose more easily into small and uniform M6C and MC carbides.
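Size statistics of this kind (the share of carbides below 2 μm, above 14 μm, and so on) can be reproduced from a list of per-particle equivalent diameters with a simple binning step. The sketch below uses made-up diameters and illustrative bin edges, not the measured INCASteel data.

```python
import numpy as np

# Hypothetical equivalent diameters (μm) of carbides in one annealed specimen
diam = np.array([0.8, 1.5, 1.9, 2.4, 3.1, 4.8, 6.0, 7.5, 9.2, 11.0, 15.5])

bins = [0, 2, 4, 6, 8, 10, 12, 14, np.inf]   # size classes in μm
counts, _ = np.histogram(diam, bins=bins)
fractions = counts / counts.sum() * 100.0

for lo, hi, frac in zip(bins[:-1], bins[1:], fractions):
    label = f"{lo}-{hi} μm" if np.isfinite(hi) else f">{lo} μm"
    print(f"{label:>10}: {frac:5.1f} %")
```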
In order to identify the type of carbide in both specimens, the carbide precipitates were electrolytically isolated from the steel matrix and analysed by X-ray diffraction. Figure 9 shows the X-ray diffraction patterns of the carbides in each specimen. The presence of M6C, V(C,N), and M7C3 was detected in the M42 and M42-N specimens after annealing, while M2C-type carbides were also detected in the conventional high-speed steel. This phenomenon could be explained by the fact that the carbide transformation (M2C + matrix → M6C + MC) starts at the interface between the matrix and the M2C particles, and moves towards the centre of the carbide [18,23]. Hence, the lamellar carbides decompose incompletely, and M2C remains in the centre of the carbide particles. This proves that the thermal stability of lamellar M2C is stronger than that of fibrous M2C, which is confirmed by the SEM and INCASteel research (Figs 7 and 8).
The carbide precipitates were classified by type through a fractionation method and evaluated by ICP-OES. To separate the different types of carbides, the carbide powder was held in an aqueous solution (6% H2SO4 + 20% H2O2 + 1% citric acid) in a boiling water bath until the M6C-type carbides and V(C,N) were completely dissolved. The M7C3-type carbides were insoluble in this aqueous solution and were separated from the precipitated powder. The average compositions of the M7C3-type carbides and M6C-type carbides in the conventional and nitrogen-containing specimens are shown in Tables 4 and 5. The compositions reported for M6C also include V(C,N), because the chemical properties of these two kinds of precipitates are similar and they are difficult to separate by fractionation. As shown in Table 4, there is no perceptible difference in the concentrations of alloying elements in the M7C3-type carbides between the specimens. M7C3 contains mainly Cr and Fe, with minor amounts of Co, Mo, and W. M6C carbides are Mo-rich carbides containing Fe and trace amounts of Cr, W, and Co (Table 5). It is worth noting that N prefers to exist in precipitates rather than in solid solution, in both conventional M42 steel and nitrogen-alloyed M42 steel.
The impact energies of the conventional and nitrogen-containing M42 steels after annealing are 8.1 ± 0.2 J and 12.6 ± 0.6 J, respectively. The increase in the amount of nitrogen was thus confirmed to improve the impact toughness of the steel. Figure 10 shows the fractography of AISI M42 high-speed steel specimens with different nitrogen contents after forging and annealing. On the fracture surface of the conventional M42 steel, the cleavage fracture mode is predominant, and the ductile fracture mode is hardly found (Fig. 10(a)). A number of large primary carbides are pulled out or broken to form cleavage fracture facets (Fig. 10(b)). In the case of the nitrogen-alloyed M42 steel, fine spherical carbides were observed at the fracture surface, as shown in Fig. 10(c). The spherical carbides were present as clusters due to the higher concentration of alloying elements in these regions (Fig. 10(d)). Moreover, the signs of plastic deformation on the fracture surface of the nitrogen-alloyed M42 steel are much more pronounced. Because most coarse carbides in high-speed steels are located along the cell boundaries and are much harder than the matrix, microcracks initiate along the crystal boundaries and are facilitated by these coarse carbides [24-27]. The size, fraction and distribution of the carbides located in the intercellular region determine the fracture behaviour of the steel. Consequently, impact toughness can be enhanced when the carbides are smaller and more uniformly distributed. It is confirmed from Figs 7 and 8 that impact toughness increases as the volume fraction of fine carbides increases.
Conclusions
In this study, the effects of nitrogen on the microstructure, primary carbides, and toughness properties of AISI M42 high-speed steel were investigated. The main conclusions are summarized as follows:
1. The as-cast high-speed steel AISI M42 consisted of unstable primary and eutectic M2C carbides and the carbonitride V(C,N) in the matrix. The structure of the M2C carbide is very sensitive to the nitrogen content of the high-speed steel. M2C presents a lamellar shape in steel with a low nitrogen content of 0.006%, whereas a fibrous shape is observed as the nitrogen content increases to 0.011%.
2. Forging and annealing resulted in a partial transformation of the lamellar M2C carbides into small and stable M6C and MC carbides, with an associated change in the crystal orientation. This carbide transformation could be more complete for fibrous M2C in the nitrogen-containing (w[N]% = 0.01) M42 steel.
3. Increasing the nitrogen content in AISI M42 high-speed steel improves both the carbide morphology and dimensions but does not change the types of carbides. Controlling the eutectic M2C carbides to a fibrous shape by nitrogen alloying of the as-cast ingots helps prevent large undecomposed residual eutectic carbides after annealing and accordingly increases the impact energies of the annealed steel.
4. According to the impact toughness results, toughness was determined by the dimensions and distribution of the carbides. High impact toughness derives from the fine, uniform carbides that are refined by nitrogen.
Data availability statement. All data generated or analysed during this study are included in this published article (and its Supplementary Information files).
"year": 2018,
"sha1": "5ec37f56ba9b40ed94cb6c0b8928a4548b94c7fd",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-22461-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2fc50ef770dcff9104082ee8e936b39148923fc8",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Is globalization healthy: a statistical indicator analysis of the impacts of globalization on health
It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse whether more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all.
Introduction
In the past, globalization has often been seen as a more or less economic process characterized by increased deregulated trade, electronic communication, and capital mobility. However, globalization is becoming increasingly perceived as a more comprehensive phenomenon that is shaped by a multitude of factors and events, and that is reshaping our society rapidly; it encompasses not only economic, political, and technological forces, but also social-cultural and environmental aspects. This increased global economic integration, global forms of governance, and globally inter-linked social and environmental developments are often referred to as globalization. However, depending on the researcher or commentator, globalization is interpreted as growing integration of markets and nation-states and the spread of technological advancements [1]; receding geographical constraints on social and cultural arrangements [2]; the increased dissemination of ideas and technologies [3]; the threat to national sovereignty by trans-national actors [4]; or the transformation of the economic, political and cultural foundations of societies [5]. In our view, globalization is an overarching process encompassing many different processes that take place simultaneously in a variety of domains (e.g., governance structures, markets, communication, mobility, cultural interactions, and environmental change). The pluralistic definition of globalization by Rennen and Martens [6] offers a conceptualization capturing the complexity of the different dimensions, processes, scale-levels, and linkages and pathways characterizing the relationship between globalization and health. Hence, contemporary globalization is defined as the intensification of cross-national interactions that promote the establishment of trans-national structures and the global integration of cultural, economic, ecological, political, technological and social processes on global, supra-national, national, regional and local levels [6].
Looking at the health of populations, Martens [7] and Huynen [8], amongst others, argue that changes in drivers of disease are brought about not only by economic changes, but also by changes in the social, political, and environmental domains at local, regional, and global levels. Health improvements experienced in developed countries over the past centuries are mainly vested in social and environmental changes, whereas more recent health improvements in developing countries can be broadly related to knowledge transfer and socio-cultural determinants. Nowadays, global processes influence all these important health determinants. Hence, globalization and its underlying processes have brought about vast changes in both health determinants and related health outcomes. As a result, the geographical scale of important health issues is significantly increasing [9]. The link between global mobility and the spread of infectious diseases is perhaps the best-known health effect of globalization. However, it is only one of the many possible health implications of globalization. Many scholars have tried to conceptualize the possible linkages between globalization and health. Woodward et al. [10], for example, propose a framework based on three component circular processes of globalization: openness; cross-border flows; and rules and institutions. However, their conceptualization mainly focused on the health effects of economic globalization. Labonte and Torgerson [11] review different conceptualizations of the globalization-health relationship, resulting in a diagrammatical synthesis that mainly focuses on governmental policy changes as well as economic determinants of health, but with the inclusion of an environmental pathway. Hence, many of these approaches primarily emphasize the economic and institutional side of globalization, defining globalization in a rather narrow way. Labonte and Schrecker [12,13] took a somewhat different approach in their framework for the Commission of Social Determinants of Health, conceptualizing how globalization affects disparities in access to social determinants of health.
Because of the multitude of underlying processes shaping the globalization-health link, ideas about globalization, health determinants and possible outcomes should be broadened. The causality of human health is multi-factorial and many population health problems are invariably embedded in a global context [8]. Taking this broader view on globalization and global health, Huynen et al. [9] developed an integrated conceptual framework for the health implications of globalization. We can conclude that a variety of both negative and positive effects are expected to influence our health in the (near) future [8,9] (see Table 1 for examples), but it is still very uncertain what the overall health outcomes will be. Academic literature shows an ongoing polarized debate [14]. The limited empirical evidence on the multiple links between globalization and health poses a problem [15]. Many scholars urge for elaboration and possible quantitative evidence to support the hypothesized relationships [9,10,[14][15][16][17][18][19][20][21]. In this paper we try to answer the question if the process of globalization improves the health of populations (or not).
Methodology
In this paper we use an indicator-based approach [22] linking the Maastricht Globalization Index (MGI) (a measure of globalization) to important health indicators, correcting for possible confounding factors. The MGI as well as the selected health indicators and confounders will be discussed in the following sections. Subsequently, the performed statistical analyses will be clarified.
The Maastricht Globalization Index
In this section, we briefly describe the Maastricht Globalization Index (MGI) [22]. The MGI was developed by Martens and Zywiets [23] and Martens and Raza [24] to improve upon existing globalization-indices. The need for a balance between broad coverage, data availability and quality motivated the following choice of indicators (see Table 2), with data for 117 countries (see Figure 1).
The MGI is constructed in a four-stage process (see also [25]). The first stage is conceptual and choices are made about which variables are most relevant and should be included in the index. In the second stage, suitable quantitative measures are identified for these variables. In the third stage, following [26], each variable is transformed to an index with a zero to one hundred scale (this differs from earlier calculations constructing the MGI [23]). Higher values denote more globalization. The data are then transformed, on the domain level, according to the percentiles of the base year (2000) distribution, using the formula ((V_i - V_min)/(V_max - V_min)) × 100. In the last and final stage, a weighted sum of the measures is calculated to produce the final score, which is then used to rank and compare countries. The "most globalized" country has the highest score. Within each domain, every variable is equally weighted. The MGI scores are simply added, i.e., all domains receive the same weight. In this paper, we use the MGI calculated for 2008 [27].
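A minimal sketch of the third and fourth construction stages described above: min-max rescaling of each indicator to a 0-100 scale using the quoted formula, equal weighting of the variables within each domain, and an unweighted sum of the domain scores. The indicator and domain names below are hypothetical placeholders, not the actual MGI inputs.

```python
import pandas as pd

def rescale_0_100(s: pd.Series) -> pd.Series:
    """Min-max transform ((V_i - V_min) / (V_max - V_min)) * 100; higher = more globalized."""
    return (s - s.min()) / (s.max() - s.min()) * 100.0

# Hypothetical raw indicator values for a few countries
raw = pd.DataFrame(
    {"trade_share": [30.0, 80.0, 55.0],
     "fdi_share":   [1.0, 6.0, 3.5],
     "migration":   [0.5, 4.0, 2.0]},
    index=["Country A", "Country B", "Country C"],
)

# Illustrative mapping of indicators to domains (not the real MGI domain structure)
domains = {"economic": ["trade_share", "fdi_share"], "socio_cultural": ["migration"]}

scaled = raw.apply(rescale_0_100)
domain_scores = pd.DataFrame({name: scaled[cols].mean(axis=1) for name, cols in domains.items()})
mgi = domain_scores.sum(axis=1)   # every domain receives the same weight
print(mgi.sort_values(ascending=False))
```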
Several limitations apply to the MGI (and to globalization indices in general). Since there are missing data on the share of international linkages that are regional rather than global, it is impossible to distinguish globalization from internationalisation and regionalisation with complete certainty. Therefore, there is an underlying assumption that countries with many international links have a correspondingly greater number of global linkages. As expected, international statistics on eleven different indicators ranging from politics and military to the environment have widely varying degrees of data quality, reflecting the different capabilities and priorities of the organisations collecting the data. Of particular concern are the domains in which the underlying data have not been collected by official international bodies like the World Bank, IMF and/or other UN organizations, but by private or semi-public organisations. In addition, many countries are reluctant to share information about activities related to their national security, which creates data gaps that are not easily filled.
The fact that countries with fewer international linkages tend to publish less data and are less likely to be included in international statistics biases against states that are less globalized [28]. Additionally, despite being members of the UN and most other international bodies, countries with totalitarian or communist regimes (e.g., North Korea, Cuba) are often excluded in international financial statistics. Therefore, this also leads to their exclusion due to lack of data. Finally, yet importantly, countries that are too small to collect internationally coherent statistics and/or are strongly integrated into the economies of their big neighbours (e.g., Luxembourg, Monaco, and Swaziland) are also missing from the statistics and therefore excluded from the MGI.
Both the sensitivity to extreme values and year-to-year variations are a major concern for the robustness of other indices of globalization. With the methodology used to construct the MGI, the sensitivity of the index to extreme values has been sharply reduced, since the distribution is now centred on the mean of a component rather than just lying somewhere between the extreme values. Similarly, the strongest year-to-year variations are filtered by the averaging process for the highly volatile components, sharply decreasing the dependence on the choice of base year in some of the component indicators. Furthermore, several weighting methods for composite indicators, like the MGI, exist, all with their own pros and cons. Regardless of which weighting method is used, weights are in essence value judgments. For maximum transparency, we have relied on equal weighting [29]. Next, we have tested the sensitivity of the weighting scheme at the domain level. With respect to the weights for the five domains tested in the sensitivity analysis, the country rankings are consistent for approximately half of the countries. The allocation of the weights must be evaluated with care according to their analytical rationale, globalization relevance, and implied value judgments.
Health Indicators
Table 1 Positive and negative health impacts of globalization: some examples [8,9]
Positive health impacts:
- Diffusion of knowledge and technologies, improving health services;
- Diffusion of knowledge and technologies, improving food and water availability (e.g. irrigation technology);
- Improvements in health care or sanitation due to economic development;
- Global governance efforts, such as WHO's Framework Convention on Tobacco Control (WHO FCTC) and WHO's Global Outbreak Alert and Response Network;
- Increased access to affordable food supplies due to free trade.
Negative health impacts:
- Spread of infectious diseases due to increased movement of goods and people;
- Spread of unhealthy lifestyles due to, for example, cultural globalization, global trade and marketing;
- Brain drain in the health sector;
- Health risks due to global environmental change;
- Decreased government spending on public services due to, for example, Structural Adjustment Programmes (SAPs);
- Inequitable access to food supplies due to asymmetries in the global market.

In order to link the extent to which a country is globalized with the status of population health in that country, several indicators for mortality have been selected, based on the definitions used by the World Health Organization [31]:
• Infant mortality rate (probability of dying before age 1 per 1000 live births, both sexes).
• Under-five mortality rate (probability of dying by age 5 per 1000 live births, both sexes): "the probability of a child born in a specific year or period dying before reaching the age of five, if subject to age-specific mortality rates of that period [31]".
• Adult mortality rate (probability of dying between 15 to 60 years per 1000 population, both sexes): "probability that a 15-year-old person will die before reaching his/her 60th birthday [31]".
According to the World Health Organization [31], indicators representing such mortality rates provide an accurate view of overall population health. The infant mortality rate and under-five mortality rate are principal indicators used to assess child health, and overall health and development, in a country [32]. The WHO uses these indicators to measure progress on the Millennium Development Goals [31-33]. Low levels of life expectancy are inherently related to higher levels of child mortality. The adult mortality rate has become a widely used indicator for assessing the overall patterns of mortality in a country's population. The growing importance of this indicator is particularly stressed by the increasing disease burden from non-communicable diseases among adults (the economically productive age categories), driven by ageing trends and health transitions [32]. The selected mortality indicators are available for all 117 countries in the MGI indicator dataset.
Confounding factors
The relationship between the process of globalization (MGI) and the selected health outcomes cannot be isolated from other, possibly related developments. Therefore, possible confounding factors in the MGI-health relationship have been identified based on existing literature: income level and income growth (often represented by GDP per capita, GNP per capita, or growth of GDP per capita) [7,34,35]; water quality [35]; health expenditures and financing [34,35]; smoking [34]; secondary education [35]; and availability of public health resources (such as vaccinations) [35]. Table 3 provides an overview of the selected indicators associated with these confounding factors (including sample size, year and source).
Many other possible confounders have been considered for this analysis, but could not be included for different reasons. A large group of confounders have been excluded based on lack of data availability for the sampled countries, and/or a lack of current data. i Other variables could not be selected for this study because when tested not all criteria for confounding could be met. ii
Statistical methods and analysis
Correlation analysis has been conducted as a first step, in order to obtain the crude associations between the indicators used. For this we applied non-parametric Spearman's correlation analysis, as not all variables showed a normal distribution [37] iii. Next, least squares (LS) simple linear regression analysis has been performed to gain an insight into the possible associations between the MGI and the mortality indicators, as well as the strength of these associations for each of the underlying MGI Domains (all without controlling for possible confounding). Subsequently, LS multiple linear regression analysis has been performed, in order to assess whether and to what extent the MGI can explain a proportion of the variance in the dependent variables 'infant mortality rate', 'under-five mortality rate', and 'adult mortality rate', whilst controlling for the selected confounding factors [38]. It has been tested whether the models meet the regression model assumptions and are not subject to outliers [38-40] iv. Based on the results, a transformation of the mortality indicators into a natural logarithm (Ln) was required for a proper regression analysis.
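A minimal sketch of the analysis steps just described, using illustrative column names and made-up values rather than the actual dataset: a Spearman correlation, a natural-log transform of the mortality rate, and an OLS multiple regression of the transformed rate on the MGI plus confounders.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

# Hypothetical country-level data; column names are placeholders
df = pd.DataFrame({
    "mgi":              [45, 60, 72, 55, 80, 38, 66],
    "infant_mortality": [48, 22, 8, 30, 5, 70, 12],
    "education":        [55, 70, 92, 65, 97, 40, 85],
    "health_exp":       [4.0, 5.5, 8.0, 5.0, 9.5, 3.0, 7.0],
})

rho, p = spearmanr(df["mgi"], df["infant_mortality"])
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")

df["ln_infant_mortality"] = np.log(df["infant_mortality"])    # Ln transform
X = sm.add_constant(df[["mgi", "education", "health_exp"]])   # MGI plus confounders
ols = sm.OLS(df["ln_infant_mortality"], X).fit()
print(ols.params, ols.rsquared, ols.f_pvalue)
```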
To construct the final multiple regression models, backward step-wise linear regression has been used. For this process, the correlation coefficients between the dependent/confounding variables and the independent variables have been used as a criterion to prioritize the different confounding variables for inclusion in the model (i.e. variables showing a higher correlation coefficient with the independent variable have precedence over variables showing lower correlation coefficients). Moreover, the correlation coefficients have been used to identify possible cases of multicollinearity between the dependent and confounding variables. Here, the common threshold of not having a correlation coefficient higher than 0.80 has been applied [38]. When a possible case of multicollinearity has been detected, one of the two variables involved has not been included in the model, where the variable with the lower Spearman's correlation with the dependent variable has been excluded over the other variable. During the step-wise backward linear regression, the R-square and the F-statistic (as a test for the global usefulness of the model) have been used to determine the final model [38,39] v. All analyses have been performed in SPSS 15.0.

Notes to Table 3: * Other GDP measures (including GDP per capita (PPP)) have not been included for the following reasons: a) the GDP measure shows multicollinearity with the other confounders and/or b) the GDP measure when tested does not function as a confounder in the MGI-health indicator relationship. ** Data for the most recent year available in this range has been selected for each country. It should be noted that all compiled datasets largely consist of data stemming from the latest years that the set covers, and only a few cases from earlier years have been added to meet the sampled countries in the MGI dataset. Confounders that did not have any or much current data available for the sampled countries did not qualify for a compilation of data over several years, and were therefore not included in this study.
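The backward step-wise procedure and the 0.80 multicollinearity screen described above can be sketched as follows. This is a simplified illustration in Python rather than the SPSS routine used in the study, and the choices of keeping the MGI term in every model and dropping terms by largest p-value are assumptions made for the sake of the example.

```python
import pandas as pd
import statsmodels.api as sm

def drop_collinear(candidates: pd.DataFrame, y: pd.Series, threshold: float = 0.80):
    """For any pair of confounders correlated above the threshold, drop the one
    that is less (Spearman-)correlated with the outcome."""
    keep = list(candidates.columns)
    corr = candidates.corr(method="spearman").abs()
    for a in candidates.columns:
        for b in candidates.columns:
            if a != b and a in keep and b in keep and corr.loc[a, b] > threshold:
                ra = abs(y.corr(candidates[a], method="spearman"))
                rb = abs(y.corr(candidates[b], method="spearman"))
                keep.remove(a if ra < rb else b)
    return keep

def backward_stepwise(y: pd.Series, X: pd.DataFrame, alpha: float = 0.05):
    """Repeatedly drop the least significant confounder (never the 'mgi' column)
    until all remaining terms are significant; return the fitted final model."""
    cols = list(X.columns)
    while True:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = fit.pvalues.drop(["const", "mgi"], errors="ignore")
        if pvals.empty or pvals.max() <= alpha:
            return fit
        cols.remove(pvals.idxmax())

# Illustrative usage, assuming a DataFrame df with an 'mgi' column, a
# log-transformed outcome, and candidate confounder columns:
# keep = drop_collinear(df[confounder_cols], df["ln_infant_mortality"])
# final = backward_stepwise(df["ln_infant_mortality"], df[["mgi"] + keep])
# print(final.rsquared, final.fvalue)
```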
Results

Spearman correlation
To give an indication of the crude associations of the MGI and the MGI Domains with the health indicators, the Spearman's correlations are given in Table 4.
The results show that the MGI has a statistically significant vi negative correlation (at α = 0.01) with all selected mortality indicators (-0.798, -0.803, -0.717, respectively). When taking a closer look at the individual domains of the MGI, the results in Table 4 reveal that all underlying domains have a significant negative correlation (at α = 0.01) with the mortality indicators. The correlations between the mortality rates and the socio-cultural and technological domains are particularly strong.
Results simple linear regression models
Tables 5 and 6 and Figure 2 show the simple linear regression outcomes for the mortality indicators (Ln transformed), with the MGI and the MGI Domains, respectively, as independent variables, without correction for confounding factors. The associations between the MGI/MGI Domains and the mortality indicators suggested by the Spearman's correlation outcomes logically correspond with the associations that can be ascertained from these univariate regression analyses. All results are significant (at α = 0.01) in the expected direction. From the R-squares, it follows that the variation in the MGI partly explains the variation in all mortality indicators. Similar to the correlation results, the R-squares in Table 6 indicate that the 'social & cultural' and the 'technical' domains of the MGI show a stronger association with the mortality indicators.
Results multiple regression models
Overall, it can be observed that the R-squares of the multiple regression models are higher in all instances, in comparison to the results of the simple linear regression analyses in Table 5. This indicates that the models for all three mortality indicators have been improved in explanatory power by adding the confounding factors. In all three models, the confounder 'Total expenditure on health as a percentage of gross domestic product' has not been included due to multicollinearity with 'Immunization, measles (% of children 12-23 months) 2008'.

Multiple regression model for Infant mortality rate

The final model for Ln Infant mortality rate (Table 7) shows significant t-values for all variables included. The coefficients for the MGI and the confounders all show the expected signs/direction. In addition, a high R-square (0.880) and a significant and high F-statistic is reached. The decrease in regression coefficients for the MGI compared to the results of the simple linear regression analysis indicates that the confounders play a significant role in the posed relationship. When controlling for the confounding factors, however, the MGI still remains significantly associated with the Ln Infant mortality rate.
Multiple regression model for Under-five mortality rate
The results for the final model of Ln Under-five mortality rate (Table 8) show that all resulting coefficients display the expected signs, and all t-values are significant at the α = 0.01 level. The R-square is high (0.885) and the F-statistic is high and significant. The significance of the confounding factors indicates that these factors do play a relevant role in the relationship between the MGI and the Ln Under-five mortality rate. Hence, the higher MGI coefficient found for the simple linear regression might have been an overestimation of the association between the MGI and the Ln Under-five mortality rate, and this association has now been corrected for relevant confounding factors. When controlling for the confounding factors, however, the MGI still remains significantly associated with the Ln Under-five mortality rate.

Multiple regression model for Adult mortality rate

The results from the final model for Ln Adult mortality rate (Table 9) show that all coefficients have the expected signs, and the t-values are significant (at α = 0.01). The R-square is relatively high (0.612) and the F-statistic is significant. The decrease in the regression coefficient for the MGI compared to the results of the simple linear regression analysis indicates that 'Improved sanitation facilities (% of population with access) 2000-2006' plays a significant role in the posed relationship. When controlling for this confounding factor, however, the MGI still remains significantly associated with the Ln Adult mortality rate.
Discussion
As this research focuses on indicators of mortality to highlight an important side of global health outcomes, it is interesting to look at some of the drivers directly related to mortality (or factors linking globalization and mortality) identified in the current body of research in this field. Martens [7] claims that increased income levels can result in a decrease in mortality rates, which ultimately impacts life expectancy rates positively. Burns, Kentor, and Jorgenson [35] focus on infant mortality and discuss a country's level of internal development and the related dependencies on the world economy (affecting domestic institutional structures) as a main driver. However, the relation between the level of a country's development and the resulting impact on infant mortality is not yet fully understood. Other factors they found to be related to infant mortality are the macro-level effect of export commodity concentration, GDP per capita, health expenditures per capita, secondary education, and organic water pollution. They identified several mediating factors between global dependence and infant mortality: quality of water and health care; level of internal development, such as GNP per capita; the role of ecology (pollution and misuse of land); as well as public health factors (a lack of resources for public health can be seen in indicators such as the scarcity of inoculation against childhood diseases, and the lack of trained medical personnel for pre- and post-natal care and for assistance with the birth process itself) [35]. Cornia et al. [34] associate globalization mainly with economic changes, such as economic policy, protectionism, costs of technological transfer, privatization, market liberalization, and trade and financial liberalization. Looking at the slow progress in infant mortality rates over the past decades, the authors suggest that many factors can be responsible for these slow improvements, such as slow growth of household incomes, greater income volatility, and shifts in health financing, amongst others. In their study, the effects of globalization are captured by comparing the timeframe of 1980-2000 (the era of globalization) with other timeframes, indicating changes in the following indicators: growth of GDP per capita, economic stability, income inequality, inflation and prices of basic goods, taxation and public health expenditure and health financing, migration and family arrangements, technical progress in health, smoking, drinking and obesity, and random shocks [34].
The results of our analysis (Spearman's correlations, and simple and multiple linear regression analyses) indicate that the infant mortality rate, under-five mortality rate and adult mortality rate all show a negative association with the process of globalization (as measured by the MGI). Specifically, technological globalization and socio-cultural globalization are shown to have strong associations with the selected health indicators. The multivariate analyses show that different confounders have been found to be significant in the three final models. Specifically, for Ln Infant mortality rate, confounders accounting for primary and secondary education and public health expenditures have been found to be significant. For the Ln Under-five mortality rate, next to the confounders for primary and secondary education, smoking prevalence among females has been shown to be significant in the final model. Lastly, for the model of Ln Adult mortality rate, only a confounder on access to improved sanitation facilities has been significant. These factors can thus possibly function as confounders in the relationships between the respective mortality rates and the MGI. However, the confounders in the final models could also be important mediating/causal factors in the association between the mortality rates and the MGI. Either way, in all multivariate models, the association between globalization and the mortality indicators remains significant after controlling for confounding factors.
Given the limited existing quantitative information on the association between globalization and health, the results might provide a crude initial indication of the potential advantageous effect of globalization on health.
In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association between the MGI and health as simple evidence that globalization is mostly good for our health. It is important to note that all indicators and data are on the country level, without a specific spatial dimension. Globalization interacts with health at levels that make measurement difficult, e.g., trans-border environmental issues, cultural transformations and a so-called 'global consciousness'. For example, the data do not show us that the most globalized countries might have lower mortality rates because they have exported their unhealthy pollution and other externalities of the production of goods and services they enjoy (and which contribute to their health) to people and environments in other parts of the world. Hence, some of the winners might be benefiting from their high levels of globalization at the expense of others. Importantly, it should also be noted that the MGI represents actual levels of globalization across different domains, rather than the mere implementation of neoliberal policies.
Conclusion
In this paper, we consider the impact of the recent process of globalization on the health of populations. Looking at the results, globalization can be characterised as both more complicated and more surprising than was anticipated. One clear lesson can be learned from the many global assessments that have been produced over the past decades: dogmatic predictions regarding the earth's future are unreliable, ill-founded and misleading, and can be politically counterproductive. So, this analysis is beset with the uncertainties and assumptions that apply to any global statistical indicator analysis [41]. For example, if consumerism and global economic processes do have polluting and other unhealthy negative side-effects for some, it needs to be asked which direction these dynamics need to take for sustainable health for all. Furthermore, this analysis is based on 'present day data'. As the globalizing processes intensify over time, the indirect impacts of human-induced disruption of global biogeochemical cycles and global climate change, and their impacts on human health, may start to become more apparent [42,43]. Borghesi and Vecelli [44] also state that the available empirical evidence suggests that the current process of globalization is unsustainable in the long run unless we introduce new institutions and policies able to govern it, a similar claim being made by Tisdall [45] and Watanabe [46] looking at economic globalization only. Schrecker et al. [47] furthermore reject the presumption that globalization will yield health benefits as a result of its contribution to rapid economic growth and associated reduction in poverty.
Hence, for future research we hypothesize that a country's performance might be classified into four categories (adopted from [48]): vicious cycle (low globalization, high mortality), globalization-lopsided (high globalization, high mortality), health-lopsided (low globalization, low mortality) or virtuous cycle (high globalization, low mortality). In the vicious cycle, any efforts to properly integrate into the global process are yet unsuccessful, but might even result in (temporary) adverse health effects (e.g. Ghana). Globalization-lopsided may happen when integration into the globalization process has not yet resulted in major health benefits, or might have even resulted in increasing health problems (e.g. Egypt). Health-lopsided might happen when health improvements occur that are not related to any globalization benefits, but due to other domestic policies or developments (e.g. Peru). In a virtuous cycle, countries might have benefited from their integration into the globalization process, while averting any associated health risks. It is important to note, however, that for some countries the virtuous cycle could be the result of bias due to causal sequence (i.e. did all the major improvements in health already occur prior to the modern-day globalization process?) (e.g. the Netherlands).
Example countries: • Vicious cycle (low globalization, high mortality): Since the 1980s, Ghana has implemented the macroeconomic policy prescriptions and Structural Adjustment Programs of the Bretton Woods Institutions (BWI), but with limited success. The commitment to privatisation and cuts in public spending have, however, resulted in user fees in health care and, subsequently, restricted access for the poor, especially in rural areas [49]. In the Upper Volta region, health care use is believed to have decreased by 50 percent [50]. An additional health problem is, for example, the out-migration of doctors and nurses [51]. Ghana has experienced an increase in adult mortality rate from 272 per 1000 population in 1990 to 331 per 1000 population in 2006 [30].
• Health-lopsided (low globalization, low mortality): Peru has experienced important health improvements in the past decades (although the gap between rich and poor remains a problem) [52] and in 1990, Peru's adult mortality rate had already declined to 178 per 1000 population [30]. Hence, many of Peru's health improvements occurred before President Fujimori started to push for integration into the global market via extensive macro-economic policies in the early 1990s. There has been macroeconomic growth since, but limited increase in development. In 2006, adult mortality rate had declined further to 136 per 1000 population [30], but Peruvians have a lower health status compared to the continental average and some are concerned about the possible adverse globalization impacts, such as increasing inequality and decreasing labor standards [53,54].
• Globalization-lopsided (high globalization, high mortality): Since the mid-1970s, Egypt has been going through a process of increasing integration into the world economy. Even though Egypt implemented further macro-economic policies and structural adjustment programs in the 1980s and 1990s, the associated impacts on economic growth and development have been disappointing and uneven [55], for example resulting in increasing unemployment. Egypt also faced many health challenges such as low formal health coverage and poor quality of many health facilities. This resulted in an increased need for health reform, increasing public health expenditure and pro-poor health care [55,56]. Although adult mortality rate has declined over recent years, it is still relatively high at 186 per 1000 population in 2006 [30].
• Virtuous cycle (high globalization, low mortality): In the Netherlands, mortality started to decrease progressively in the late nineteenth century. Although this decline happened decades before the start of modern-day globalization, the diffusion of knowledge about, for example, sanitation probably played an important role besides improved overall living conditions [8]. Adult mortality rate was 92 per 1000 population in 1990, declining further to 70 per 1000 population in 2006 [30].
The important issue for policy purposes, of course, is how a country may move towards the virtuous cycle, and several important research questions can be identified. How have countries changed their location over time and due to which underlying mechanisms? If countries find themselves in a vicious cycle, should they first focus on enhancing their health status or on enhancing their integration into the globalization process? Looking at the health-lopsided countries and the globalization-lopsided countries, which have a higher chance of reaching a virtuous cycle and which are most at risk from shifting to a vicious cycle? How can health-lopsided countries make sure that their health status is not compromised by any efforts to improve their integration in the globalization process? How can globalization-lopsided countries increase the health benefits of globalization? And finally, will the countries that now experience a virtuous cycle also remain in this category in the future?
What is clear is that the increasing complexity of our global society means that sustainable health cannot be addressed from a single perspective, country, or scientific discipline. Changes in human health in the context of globalization are far more complex than health issues that had to be tackled in the past. As addressed by others (e.g., Borghesi and Vecelli [44]), it is our hope that further analysis of the health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of sustainable and healthy development [57]. To this end, extensive empirical work is needed to identify the relevant causal mechanisms underlying the influence of globalization on human health.
Appendix
i The variables excluded from the analysis based on these reasons are: from WHOSIS [30,31]: adult literacy rate (%); adolescent fertility rate (%); antenatal care coverage - at least four visits (%); births attended by skilled health personnel (%); prevalence of HIV among adults aged ≥15 years (per 100 000 population); population with sustainable access to improved drinking water sources (%) total; population with sustainable access to sanitation (%) total; prevalence of current tobacco use amongst adolescents (13-15 years) (%) both sexes; prevalence of current tobacco use amongst adults (≥15 years) (%) both sexes; deaths amongst children under 5 years of age due to malaria (%); deaths due to HIV/Aids (per 100 000 population per year). Confounders assessed and excluded for the same reasons from the World Data-Bank [36] include: malnutrition prevalence, weight for age (% of children under 5); literacy rate adult female (% of females ages 15 and above); literacy rate adult male (% of males ages 15 and above); total enrolment, primary, female (% net); total enrolment, primary, male (% net); pregnant women receiving prenatal care (%); and births attended by skilled health staff (% of total).
ii Variables that did not satisfy the criteria of functioning as a confounder on the MGI-health indicator relationships are: 'Smoking prevalence, males (% of adults) 2006'; and 'Prevalence of HIV, total (% of population ages 15-49), 2007' [36]. iii The following tests have been used to assess whether the indicators used display a normal distribution: Frequency histograms (for a graphical assessment of normality of distribution); P-P plots and Q-Q plots (used as a complementary graphical assessment tool for the normality of the distribution of the variable, in addition to the frequency histograms); Boxplots (to graphically check for outliers and skewness); the Shapiro-Wilk W-test (used as a formal test for normality [37]; however, results of the W-test have been treated with care and placed within the context of the insights gained from all the other normality tests performed); descriptive statistics have been used to numerically assess skewness and kurtosis (criterion used for skewness: the skewness-statistic must lie between +2 and -2; criterion used for kurtosis: the kurtosis-statistic must lie between +2 and -2) [38].
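As a rough illustration of the numerical criteria in footnote iii, the Shapiro-Wilk test together with the ±2 bounds on skewness and kurtosis could be scripted as in the following sketch; the data are synthetic and the variable names are placeholders, not the study's actual indicators.

```python
# Sketch of the normality checks described in footnote iii (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
indicator = rng.lognormal(mean=3.0, sigma=0.5, size=176)  # stand-in for a country-level mortality rate

def normality_report(x):
    """Shapiro-Wilk W-test plus the skewness/kurtosis criteria (both statistics must lie in [-2, 2])."""
    w_stat, p_value = stats.shapiro(x)
    skewness = stats.skew(x)
    excess_kurtosis = stats.kurtosis(x)  # 0 for a normal distribution
    return {
        "shapiro_W": round(w_stat, 3),
        "shapiro_p": round(p_value, 4),
        "skewness_within_bounds": -2 <= skewness <= 2,
        "kurtosis_within_bounds": -2 <= excess_kurtosis <= 2,
    }

print("raw values:    ", normality_report(indicator))
print("ln-transformed:", normality_report(np.log(indicator)))  # the Ln transformation used for the mortality rates
```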
iv All assumptions of least squares regression analysis have been checked and could be met by the models.
The assumption of linearity has been checked with scatterplots and linear curve estimation. The normality of the probability distribution of the error terms of prediction have been tested by generating frequency histograms of the standardized residuals. To test for homoscedasticity, the standardized residuals and the standardized predicted values have been plotted in a scatterplot to observe a random pattern. For the assumption of mean independence, residual statistics and scatterplots of the residual against the predicted values have been used to verify that the mean of the residuals would be approximately zero. In addition, all models have been checked for multivariate outliers by generating Cook's Distances [58]. When the Cook's Distance is higher than 1.0, a case is considered an outlier and is deleted from the analysis.
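The outlier screening step in footnote iv can likewise be sketched in a few lines; the model below uses invented data and placeholder variable names, so it only demonstrates the Cook's Distance rule rather than reproducing the study's models.

```python
# Sketch: flag multivariate outliers with Cook's Distance > 1.0 (invented data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 117
mgi = rng.normal(50, 10, n)                      # stand-in for the globalization index
confounder = rng.normal(0, 1, n)                 # stand-in for, e.g., an education or sanitation variable
ln_mortality = 6 - 0.03 * mgi - 0.2 * confounder + rng.normal(0, 0.3, n)

X = sm.add_constant(np.column_stack([mgi, confounder]))
model = sm.OLS(ln_mortality, X).fit()

cooks_d = model.get_influence().cooks_distance[0]
outlier_cases = np.where(cooks_d > 1.0)[0]
print("cases exceeding the Cook's Distance threshold:", outlier_cases)  # such cases would be deleted and the model refit
```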
v Note: The step-wise backward linear regression analyses have been performed manually.
vi When reporting on statistical results, the term 'significance' refers to 'statistical significance'. vii | 2014-10-01T00:00:00.000Z | 2010-09-17T00:00:00.000 | {
"year": 2010,
"sha1": "1f6f6b75896cdcf4846035ac74f2eab057497cf9",
"oa_license": "CCBY",
"oa_url": "https://globalizationandhealth.biomedcentral.com/track/pdf/10.1186/1744-8603-6-16",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f6f6b75896cdcf4846035ac74f2eab057497cf9",
"s2fieldsofstudy": [
"Economics",
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Economics",
"Medicine"
]
} |
5218324 | pes2o/s2orc | v3-fos-license | A Study of Epiphyses in the Young Prepubescent Knee Using Magnetic Resonance Imaging
Background: Questions have been raised concerning the safety of intra-articular anterior cruciate ligament (ACL) reconstruction in prepubescent children aged <7 years. However, normal values for the width of the lateral femoral condylar epiphysis and height of the tibial epiphysis have yet to be established through the use of magnetic resonance imaging (MRI). Purpose: To determine normal values for the width of the lateral femoral condylar epiphysis and height of the tibial epiphysis at the knee in prepubescent children aged <7 years by use of MRI and to compare this age group with an older cohort of prepubescent children aged <10 years. Study Design: Cross-sectional study; Level of evidence, 3. Methods: An electronic search was conducted for pediatric knee MRI examinations at the authors’ institution from March 2003 to March 2013. The total and ossified lateral femoral condylar widths were determined on coronal proton density–weighted images. The total and ossified tibial epiphyseal heights were recorded on the sagittal T1-weighted image best containing the ACL footplate. The intraclass correlation coefficient (ICC) was calculated to determine interobserver agreement. Knees were stratified by age into 2 groups: children between the ages of 3 and 6 years (group 1) and children between the ages of 7 and 9 years (group 2). Each cohort was further stratified by sex. Results: Group 1 consisted of 10 children (mean age, 4.3 years) and group 2 consisted of 10 children (mean age, 8.5 years). There were a total of 20 knees. There was a statistically significant difference between groups 1 and 2 for the ossified lateral femoral condylar width where femoral tunnel location would be expected (20.00 ± 4.20 vs 26.27 ± 4.12 mm, respectively; P = .0035) and for total lateral femoral condylar width (25.57 ± 3.47 vs 29.43 ± 4.04 mm, respectively; P = .0339). No difference was found for total tibial epiphyseal height between the 2 groups. However, there was a difference between groups 1 and 2 for ossified tibial epiphyseal height (13.20 ± 1.63 vs 15.27 ± 0.94 mm, respectively; P = .0028). No difference was found for average ossified tibial epiphyseal height or ossified lateral femoral condylar width between boys and girls in the younger or older cohorts. The ICC was strong (>0.7) at femoral and tibial locations where tunnel placement would be expected. Conclusion: Prepubescent children <7 years old have smaller knee epiphyses than older prepubescent children, and on average, present with an osseous bone stock of 20 mm for lateral femoral condylar width and 13 mm for tibial epiphyseal height. Study results suggest that children aged <7 years possess enough osseous bone stock at the lateral femoral condyle to support transepiphyseal ACL reconstruction. However, future studies will be necessary to determine the safety and effectiveness of this procedure in children aged <7 years. Clinical Relevance: ACL tears in children are more frequently being recognized by the orthopaedic community. The trend toward increasing participation in competitive and recreational sports has contributed to this phenomenon. Young patients with complete ACL tears and open growth plates often provide a management dilemma for surgeons who wish to perform reconstructive surgery.
The preferred treatment option for complete ACL tear remains controversial. 11,21 Trends of increased participation in sports and recreational endeavors by children are contributing factors. Since skeletally immature patients may not reliably limit their physical activity following injury, 1,16,29 early surgical intervention to address complete ACL tear and to restore knee stability is gaining acceptance as a strategy for the prevention of chondral and meniscal abnormalities associated with nonoperative management. 1,2,13,17,23,28,29,34 Surgical techniques that spare knee physes are important for young populations, since the risk of iatrogenic growth disturbance is most concerning for children with at least 5 cm of remaining lower extremity growth potential. 4,16 Potential poor outcomes for children with ACL reconstructions that cross growth plates are known to include angular deformity and leg length discrepancies related to premature growth plate closure or lower extremity overgrowth. 6,7,10,14,22 Both intra-articular and extra-articular physeal-sparing ACL procedures exist to avoid physeal injury during reconstructive surgery. Intra-articular techniques have been championed recently as they have been shown to better restore normal knee kinematics. 2,15,24 However, intra-articular physeal-sparing ACL reconstruction procedures are not without risk, and the potential for growth plate injury has been described. 5,25 Questions about which children are too young to receive intra-articular physeal-sparing surgery have been raised in the literature. 3,30 Some proponents have suggested that this technique may be safe in children as young as 5 years, although no definitive evidence exists to support this procedure as an established practice. 3 Concerns about the safety of intra-articular physeal-sparing ACL reconstruction are relevant since complete ACL tears in children younger than 7 years have been reported. 8,31,35 Congenital absence of the ACL also has been described, and symptomatic knees in this population present a management dilemma similar to complete ACL tear. 19,20,33 As the orthopaedic community reduces the role of traditional nonoperative management in favor of earlier surgical intervention, open questions remain about what strategies constitute best management practices for young children. Variables relevant to surgical tunnel placement that are entirely in knee epiphyses have been described in prepubescent children and adolescents. These include the height of the tibial epiphysis and the width of the lateral femoral condylar epiphysis. 9,18 However, these parameters have not yet been established for children of early primary school age and younger (<7 years). Determining normal values for the width of the lateral femoral condylar epiphysis and the height of the tibial epiphysis for children younger than 7 years can provide a starting point for addressing the potential safety of intra-articular physeal-sparing ACL surgery in young prepubescent knees. Surgeons can use these values as a reference to aid in decision making about management during the workup of complete ACL tear or congenitally absent ACLs in young children.
The purposes of this study were to determine normal values for width of the lateral femoral condylar epiphysis and height of the tibial epiphysis in prepubescent children younger than 7 years using magnetic resonance imaging (MRI) of the knee and to compare this age group with an older cohort of prepubescent children younger than 10 years. We hypothesized that the younger cohort would have less ossified bone stock and smaller total size for the average width of the lateral femoral condylar epiphysis and height of the tibial epiphysis when compared with the older cohort.
MATERIALS AND METHODS
The study was approved by the authors' institutional review board and complied with Health Insurance Portability and Accountability Act guidelines. The requirement for patient informed consent was waived for this retrospective study.
The study population consisted of children between the ages of 3 and 9 years. Patients undergoing an MRI of the knee between March 2003 and March 2013 were identified through an electronic search of our departmental picture archiving and communication system (PACS). Inclusion criteria were as follows: (1) an intact ACL or a mild sprain of the ACL that did not preclude clear delineation of its normal landmarks and morphology, (2) normal proximal tibial epiphysis morphology, and (3) normal distal femoral epiphysis morphology. A total of 27 knees were identified during the search. Seven knees were excluded for the following reasons: Blount disease (n = 2), chondroblastoma at the lateral femoral condyle (n = 1), and wide field of view imaging not typically performed during routine MR knee examinations (n = 4). Therefore, 20 knees (12 male and 8 female; mean age, 6.4 years; range, 3-9 years) were included in the study. Children within the age range of this study have been described as high risk for iatrogenic physeal injury following transphyseal ACL reconstruction surgery. 4 We stratified our study population further into 2 cohorts. Group 1 consisted of younger children of toddler, preschool, and early primary school age (range, 3-6 years), and group 2 consisted of an older primary school age group (range, 7-9 years).
Most MRI examinations were performed at 1.5 T (Magnetom Avanto or Espree; Siemens, Erlangen, Germany) or at 3.0 T (Magnetom Trio; Siemens, Erlangen, Germany). Two MRI examinations, however, were performed at 1.5 T (Eclipse; Philips Medical System, Best, the Netherlands). MRI examinations included standard 2-dimensional (2D) coronal turbo spin-echo (TSE) proton density (PD)-weighted (or T1-weighted) and sagittal SE T1-weighted sequences, with a slice thickness of either 3 or 3.5 mm. A single examination had a slice thickness of 4 mm for coronal and sagittal sequences, and a single examination had a slice thickness of 2.5 mm for the sagittal sequence. For a single examination, a standard 2D coronal short tau inversion recovery (STIR)-weighted sequence was used because of the absence of a coronal PD- or T1-weighted sequence. Each knee examination contained a sagittal TSE T2-weighted fat saturation or STIR sequence that was available for comparison.
Two musculoskeletal radiologists and 1 musculoskeletal radiology fellow independently and retrospectively reviewed the images on a PACS workstation. The height of the tibial epiphysis was evaluated from the sagittal T1-weighted image best containing the ACL and its tibial footplate ( Figure 1). 9 Two separate measurements were obtained: (1) The height of the tibial epiphysis was determined as the vertical distance from the cartilaginous attachment site at the midpoint of the ACL footplate to the proximal tibial epiphysis-physis interface and (2) the height of the ossified portion of the tibial epiphysis was also measured as the vertical distance from the superior margin of the ossified tibial epiphysis in line with the midpoint of the ACL footplate to the proximal tibial epiphysis-physis interface. In this study, we included knees that had intact ACLs or mild sprains, in order that the location of the tibial footplates could be readily identified. We did so to provide a reliable location for reproducible measurement and to provide a height of the tibial epiphysis that best correlated with the location of the native ACL footprint.
The width of the lateral femoral condylar epiphysis was determined using 2 separate techniques. 9 The first method (method 1) included finding the sagittal image best containing the ACL and the MR equivalent of the Blumensaat line; then, the localizer function at the PACS workstation was utilized to select the coronal PD image that corresponded to one-fourth the distance along the Blumensaat line in a posterior to anterior direction ( Figure 2A). The distance from the footplate of the ACL to the lateral cartilaginous margin of the lateral femoral condylar epiphysis was then measured as a horizontal line ( Figure 3A). Lastly, the distance from the footplate of the ACL to the lateral margin of the ossified portion of the lateral femoral condylar epiphysis was obtained ( Figure 3B). The second method (method 2) involved identifying the anterior-most of 3 consecutive coronal slices through the lateral femoral condylar epiphysis on the sagittal image ( Figure 2B), then measuring the total width and ossified width, as performed with the first method ( Figure 3).
Statistical analysis was performed by calculating the mean tibial epiphyseal height (total height and ossified height) and lateral femoral condylar width (total width and ossified width) for each individual knee from the measurements provided by each observer. Group 1 and group 2 mean measurements were compared using the unpaired t test. Each cohort was further stratified by sex and also compared by use of the unpaired t test. In addition, the mean age of group 1 and group 2, and the mean age by sex within each group, were compared in the same manner. Interobserver agreement was assessed by calculating the intraclass correlation coefficient (ICC).
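The cohort comparisons described above reduce to two-sample t tests on the per-knee mean measurements. The sketch below shows the general form of such a comparison with invented measurement vectors; it is illustrative only and does not reproduce the study data.

```python
# Sketch of the unpaired t test between the two age cohorts (invented measurements, in mm).
from scipy import stats

group1_ossified_lfc_width = [20.1, 18.5, 22.0, 16.9, 25.3, 19.4, 21.7, 14.8, 23.0, 18.3]  # ages 3-6 years
group2_ossified_lfc_width = [26.0, 27.5, 24.9, 31.1, 22.8, 28.3, 25.6, 29.4, 21.9, 25.2]  # ages 7-9 years

t_stat, p_value = stats.ttest_ind(group1_ossified_lfc_width, group2_ossified_lfc_width)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < .05 would indicate a significant cohort difference
```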
RESULTS
Group 1 (n = 10) consisted of a younger cohort of knees with a mean age of 4.3 years (standard deviation [SD], ±1.2 years; range, 3-6 years). Group 2 (n = 10) consisted of an older cohort of knees with a mean age of 8.5 years (SD, ±0.9 years; range, 7-9 years). The age difference for group 1 versus group 2 was statistically significant (P < .0001). The mean total tibial epiphyseal height showed no difference. The ossified tibial epiphyseal height, however, did significantly differ between cohorts (P < .003, Table 1). The younger cohort of knees had a mean ossified tibial height of 13.2 mm versus 15.3 mm for the older children. Significant differences also were identified between group 1 and group 2 with regard to the mean total width of the lateral femoral condylar epiphysis by the first method and the mean width of the ossified portion of the epiphysis by the first and second methods (Table 1). The largest mean measurement by either method for the total width of the lateral femoral condyle was 25.6 and 29.4 mm for the younger and older groups, respectively. The ossified portion of the lateral femoral condyle for each group was smaller, with the mean for group 1 measuring 20.0 mm as compared with 26.3 mm for group 2. There was no significant difference in ossified tibial epiphyseal height or ossified lateral femoral condylar width between male and female patients in group 1 (Table 2). Girls had less total tibial epiphyseal height than boys in group 1, measuring 14.7 versus 16.3 mm on average, respectively (P = .0191). Girls also had less total lateral femoral condylar width than boys in group 1, measuring 23.3 versus 27.9 mm on average, respectively (P = .0247). There was no significant difference between total or ossified tibial epiphyseal height and lateral femoral condylar width between male and female patients in group 2 (Table 3). A summary of age, sex, and mean measurements for each knee is presented in Table 4. There was no significant difference in age between boys and girls in group 1 or group 2, respectively. The general measure of agreement among radiologists for each measurement is provided in Table 5. During this study, strong interobserver agreement was found for each parameter except for determination of the total height of the tibial epiphysis, where moderate-to-strong agreement was present.
DISCUSSION
Early surgical intervention is gaining acceptance as a treatment strategy for complete ACL tear in the skeletally immature knee. 1,2,13,17,23,28,29,34 However, best management practices remain a subject of controversy in the orthopaedic community. 11,21 Questions persist about which knees are too immature for intra-articular ACL reconstruction. 3,30 Concerns about growth plate disturbance persist, since iatrogenic complications may result in the need for major limb reconstruction. 20 As a consequence, many surgeons still prefer initial nonoperative management for complete ACL tear in the young prepubescent knee despite the risk of poor outcomes. 21 Some authors have suggested ACL reconstruction should be avoided entirely in prepubescent children who are shorter than older siblings and parents by at least 10 to 15 cm. 27 The incidence of skeletally immature knees requiring ACL reconstruction is poorly understood for prepubescent children younger than 7 years. Complete ACL tears in this age group are thought to be uncommon, and a few cases are reported in the literature. Waldrop and Broussard 35 reported a case of a 3-year-old girl who suffered a complete ACL tear at the midsubstance after a fall. The ligament was debrided but ACL reconstruction was not performed because of concerns about epiphyseal growth arrest. Schaefer et al 31 reported a case of a 4-year-old boy who sustained a complete ACL tear at the midsubstance after falling off a toboggan. The ACL tear was treated with primary repair, but the ACL was completely absent at follow-up arthroscopy 5 years later. Corso and Whipple 8 described a case of a 3-year-old boy who presented without a known history of trauma and who was shown to have an unusual avulsion of the ACL that peeled off the cartilage anlage of the femoral epiphysis at arthroscopy. Treatment involved debridement and reapproximation of the ligament with the femoral attachment site, followed by immobilization with the knee in extension.
Symptomatic congenital absence of the ACL is another source of knee instability for young children. Case reports have described ACL reconstruction in children younger than 7 years. Kocher et al 20 presented a case of a 3-year-old boy with symptomatic knee instability from congenital absence of the ACL and associated proximal femoral focal deficiency. ACL reconstruction was performed with an iliotibial band graft as a combined intra-articular and extra-articular reconstruction. Another report 19 described a case of a 3-year-old boy who presented with an isolated congenital aplasia of the ACL. A Clocheville ligamentoplasty was performed at age 5 years because of instability at the knee. The lower age limit for ACL reconstructive surgery in the skeletally immature knee has not been established. Some authors have suggested that this procedure should be avoided in prepubescent children with substantial remaining growth potential. 27 However, in a survey of the Herodicus Society and The ACL Study Group in 2002, respondents claimed to have performed ACL reconstructions in patients as young as 2 years, although details about indications or techniques were not provided. 21 Some authors have speculated that intra-articular physeal-sparing ACL reconstruction is theoretically possible in prepubescent children younger than 7 years. 3 However, intra-articular ACL reconstructions that do not cross physes are not entirely without risk for growth plate injury. Lawrence et al 25 reported the case of an adolescent who suffered premature closure of the lateral femoral physis following revision ACL reconstruction despite the belief that no direct transgression to the physeal plate had occurred. Theoretical mechanisms for injury to the femoral physis, in addition to direct transphyseal drilling, were speculated to include indirect thermal and pressure insults that occurred at the time of epiphyseal drilling. Proximity of the femoral physis to the femoral origin of the ACL is another important variable relevant to the safety of ACL reconstruction techniques in children, with ramifications for intra-articular or combined extra-articular and intra-articular procedures. 5,26 Iatrogenic injury to the gastrocnemius tendon, lateral collateral ligament, and popliteus tendon have been described as other potential risks during transepiphyseal ACL reconstruction. 12 For young, prepubescent children younger than 7 years, establishing the expected average normal size of epiphyses at the knee is important since there is less margin for error during surgery as compared with older children and adolescents. Young prepubescent children have smaller bones. In addition, children younger than 7 years have a higher percentage of nonossified cartilage anlage at the knee compared with older children. Davis et al 9 used MRI to determine normal references for the average tibial epiphyseal height and lateral femoral condylar width in children and adolescents aged 7 to 16 years. We are unaware of any similar study in the literature that evaluates tibial epiphyseal height and lateral femoral condylar width in children younger than 7 years on MRI. Our study divided knees into 2 cohorts: (1) children aged 3 to 6 years and (2) children aged 7 to 9 years. The difference between the mean ages of the younger and older cohorts was statistically significant, with a P value <.0001.
With regard to the height of the ossified portion of the tibial epiphysis, in our study, a significant difference existed between the younger and older cohort of knees, measuring 13.2 and 15.3 mm, respectively (P = .0028). Therefore, we conclude that prepubescent children (<7 years) have less osseous bone stock in terms of tibial height than their older counterparts (≥7 years). There was no difference between the 2 groups with regard to the average total tibial epiphyseal height (cartilage anlage and ossified portion). Interestingly, where interobserver agreement was strong for osseous tibial epiphyseal height measurement, interobserver agreement was only moderate to strong for total tibial epiphyseal height measurements. This most likely reflects the more difficult task of differentiating the superior margin of the thin tibial epiphyseal cartilage anlage from the insertion site of the distal ACL, especially in the older cohort of knees. When comparing boys and girls in the same cohort, no difference was found for the older group of knees. In the younger cohort, girls demonstrated a 1.6-mm shorter total tibial epiphyseal height on average than boys (P = .0191), but there was no sex difference for the ossified tibial epiphyseal height.
Significant differences also existed for the width of the ossified bone stock of the lateral femoral condylar epiphysis between the 2 groups. The maximum widths obtained in the study were acquired by the first method (method 1). The average width of the younger cohort was smaller than the older cohort by 6.2 mm (P = .0035). Therefore, in a similar fashion to tibial epiphyseal height, we conclude that prepubescent children (<7 years) of toddler, preschool, and early primary school age have less expected osseous bone stock in terms of lateral femoral condylar epiphyseal width than older children (≥7 years). Our study found that the expected average normal ossified portion of lateral femoral condylar width for children younger than 7 years was 20.0 mm. For the total lateral femoral condylar epiphyseal width (cartilage anlage and ossified portion), the maximum average widths were also obtained using the first method (method 1) for both groups. The younger cohort was smaller in width than the older cohort by 3.8 mm (P = .0339). The second method (method 2) produced smaller average values than the first method (method 1) for total and ossified lateral femoral condylar width measurements for both groups, reflecting that the first method coronal slice was more anterior than the second method coronal slice. 9 When comparing boys and girls in the same cohort, no difference was found for the older group of knees. In the younger cohort, girls demonstrated on average 4.6 mm less (P = .0247) total lateral femoral condylar width than boys, but no sex difference was found for ossified lateral femoral condylar width.
Currently, the minimum length of femoral graft required for ACL reconstruction is unknown. 36 Although the average total lateral femoral condylar width for prepubescent children younger than 7 years was 25.6 mm, the ossified width was only 20.0 mm in our study. Lawrence et al 24 described a procedure that placed a 23-mm interference screw across the femoral epiphysis in an older subset of children (aged 10-12 years). Because of the limitations of the ossified width of the lateral femoral condyle, ACL reconstruction with a suspensory soft tissue graft mechanism may be a more feasible method in children younger than 7 years. A technique proposed originally for older prepubescent children and adolescents by Anderson 2 described a minimum length of 20 mm of quadruple hamstring tendon graft for the femoral tunnel. However, in a skeletally mature goat model, Zantop et al 36 showed no difference in knee kinematics or structural properties between knees with 15 versus 25 mm of soft tissue femoral graft at 12 weeks following intra-articular ACL reconstruction. This suggests that prepubescent children younger than 7 years possess enough ossified bone stock at the lateral femoral condyle to support a soft tissue graft ACL reconstruction. However, future studies will be necessary to assess long-term outcomes, safety, and applicability to young, skeletally immature human knees. One important variable that remains unknown is the healing response of the nonossified cartilage anlage at the lateral femoral condylar epiphysis following soft tissue graft placement. In the study by Zantop et al, 36 the goat model involved at least 25 mm of ossified femoral bone stock for each knee even though only 15 mm of soft tissue graft was placed in the femoral tunnel. Other relevant variables, with implications for the safety of ACL reconstruction in the skeletally immature knee, are the relationship between the length versus volume of soft tissue graft in the femoral tunnel and the ramifications of tunnel diameter. Our study determined the size of the tibial epiphysis as a vertical height, but this measurement may have shortcomings since the tibial tunnel is instead placed along an oblique course during transepiphyseal ACL reconstruction. 2,15,24 Kim et al 18 described an equation to convert the vertical height of the tibial epiphysis obtained on lateral knee radiographs into a longer oblique length to establish the maximum interference screw size that could be placed at surgery without violating the tibial physis. Future studies directly measuring the expected length of the tibial epiphysis along the oblique trajectory of the tibial tunnel on cross-sectional imaging may provide further insights for intra-articular ACL reconstruction techniques that involve a suspensory soft tissue graft method, since the length of graft in the tibial tunnel affects the amount of available graft that can be pulled into the femoral tunnel. 36 Magnetic resonance imaging is a particularly useful modality for the evaluation of the prepubescent knee in children younger than 7 years, since bone and cartilage anlage can be directly evaluated and studies are performed without ionizing radiation. MRI is a valuable resource for preoperative planning before intra-operative fluoroscopy-guided physeal-sparing ACL reconstructive surgery with regard to (1) localization of physes, (2) evaluation of epiphyseal size, (3) estimation of osseous epiphyseal bone stock, and (4) planning of tunnel course and angle.
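On the point above about converting the vertical height of the tibial epiphysis into an oblique tunnel length, the exact equation of Kim et al 18 is not reproduced here, but the underlying geometry is simple: a tunnel inclined at an angle θ from the vertical has roughly the vertical epiphyseal height divided by cos θ available within the epiphysis. The sketch below only illustrates that relationship with assumed values.

```python
# Sketch: approximate intra-epiphyseal length of an obliquely drilled tibial tunnel (assumed values).
import math

vertical_epiphyseal_height_mm = 13.0   # mean ossified tibial epiphyseal height reported for the <7-year cohort
tunnel_angle_from_vertical_deg = 35.0  # hypothetical tunnel inclination; not taken from the study

oblique_length_mm = vertical_epiphyseal_height_mm / math.cos(math.radians(tunnel_angle_from_vertical_deg))
print(f"approximate intra-epiphyseal tunnel length: {oblique_length_mm:.1f} mm")
```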
Preoperative MRI planning coupled with the use of intraoperative 3D computed tomography during ACL reconstruction has the potential to optimize tunnel placement and decrease the risk of physeal injury. 24,25 The limitations of our study include the retrospective nature of the research. Knees were identified by a search of our hospital PACS without regard for patient history or physical examination findings. Also, no prospective correlation with skeletal age was performed. The small number of knees included in the study is a limitation. Our results may not be applicable to the general population because of the small sample size. Future studies with a larger number of knees may be required to validate the conclusions of this study. Another possible limitation is that the MRI examinations were performed on different scanners within our hospital system; however, all imaging protocols were similar in terms of sequences acquired and slice thickness.
CONCLUSION
We found differences for the width of the lateral femoral condylar epiphysis between our younger and older cohorts of skeletally immature knees. Prepubescent children younger than 7 years are more likely to have smaller lateral femoral condyles, and on average, present with an osseous bone stock of 20 mm for lateral femoral condylar width. Therefore, our study suggests that children younger than 7 years possess enough osseous bone stock at the lateral femoral condyle to support transepiphyseal ACL reconstruction based on prior research in an animal model. However, future human studies are necessary to determine the safety and effectiveness of this procedure in children younger than 7 years. The osseous tibial epiphyseal height in young prepubescent children is also less than in older children. On average, children younger than 7 years have 13 mm of osseous bone stock for the height of the tibial epiphysis. | 2016-05-17T10:24:20.013Z | 2014-04-01T00:00:00.000 | {
"year": 2014,
"sha1": "d826f59bf60ef7a25a091559db95948f4ff97bdb",
"oa_license": "CCBYNCND",
"oa_url": "http://journals.sagepub.com/doi/pdf/10.1177/2325967114530090",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d826f59bf60ef7a25a091559db95948f4ff97bdb",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263680409 | pes2o/s2orc | v3-fos-license | Differential Recognition of Clinically Relevant Sporothrix Species by Human Granulocytes
Sporotrichosis is a cutaneous mycosis that affects humans and animals and has a worldwide distribution. This infection is mainly caused by Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa. Current research about anti-Sporothrix immunity has been mainly focused on S. schenckii and S. brasiliensis, using different types of human or animal immune cells. Granulocytes are a group of cells relevant for cytokine production, with the capacity for phagocytosis and the generation of neutrophil extracellular traps (NETs). Considering their importance, this study aimed to compare the capacity of human granulocytes to produce cytokines, take up fungal cells, and form NETs when interacting with different Sporothrix species. We found that conidia, germlings, and yeast-like cells from S. schenckii, S. brasiliensis, and S. globosa play an important role in the interaction with these immune cells, establishing morphology- and species-specific cytokine profiles. S. brasiliensis tended to stimulate an anti-inflammatory cytokine profile, whilst the other two species had a proinflammatory one. S. globosa cells were the most phagocytosed cells, which occurred through a dectin-1-dependent mechanism, while the uptake of S. brasiliensis mainly occurred via TLR4 and CR3. Cell wall N-linked and O-linked glycans, along with β-1,3-glucan, played a significant role in the interaction of these Sporothrix species with human granulocytes. Finally, this study indicates that conidia and yeast-like cells are capable of inducing NETs, with the latter being a better stimulant. To the best of our knowledge, this is the first study that reports the cytokine profiles produced by human granulocytes interacting with Sporothrix cells.
Introduction
Sporotrichosis is a cutaneous and subcutaneous mycosis caused by members of the Sporothrix genus, which contains pathogenic and environmental species [1,2]. The etiological agents are mostly prevalent in tropical and subtropical areas, with epidemic areas reported in Mexico, Peru, Brazil, South Africa, India, and China, among others [3,4]. Different from other mycoses, sporotrichosis is not specific to human beings and can affect wild and domestic mammals, such as cats and dogs, which are sources of fungal agents; therefore, the disease is considered a zoonosis [5][6][7]. Most sporotrichosis cases are benign lymphocutaneous infections that do not compromise the patient's life; however, the disseminated form that affects deep-seated organs is likely to occur in immunocompromised patients and is associated with high mortality rates [3,5]. Fixed cutaneous infection is another frequent form of the disease, and in this case, the infection is self-limited, most likely because of an immune response that avoids the dissemination of the pathogen to other organs [8].
The most frequently isolated species from sporotrichosis cases are Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa [9]. S. brasiliensis has recently stood out from the other two species because of the alarming epidemic outbreak of animal and human sporotrichosis that originally started in Brazil but recently expanded to other South American countries; in contrast, S. globosa is mostly isolated in China and other Asian countries [10]. The three species are thermodimorphic and grow in the environment as mycelia, which produce conidia. These fungal morphologies are the ones that infect host tissues and, once adapted to body temperature, undergo dimorphism to yeast-like cells, a morphology associated with dissemination to tissues and organs [7]. However, this classic division of morphology and stage of the infective cycle has been challenged by recent observations: germlings and hyphae have been observed in human and animal cases of sporotrichosis [11][12][13], and yeast-like cells can be transmitted from infected animals to healthy animals and human beings [14,15].
The study of anti-Sporothrix immunity has attracted attention in recent years in an attempt to understand the basis of the differences displayed by these species when interacting with host tissues and because it is an essential component in the search for immunomodulatory approaches helping in the treatment of sporotrichosis. Moreover, sporotrichosis is one of the few mycoses where antibody-based immunity is capable of protecting the host from the infection [16,17]. Thus far, both adaptive and innate immunity against Sporothrix have been studied, but the latter has been studied to a greater extent, mainly when immune effectors interact with S. schenckii or S. brasiliensis [18,19]. To date, the interaction of complement, peripheral blood mononuclear cells (PBMCs), macrophages, dendritic cells, and NK cells with Sporothrix cells has been reported [20][21][22][23][24][25], but there is limited information about the contribution of granulocytes to anti-Sporothrix immunity. This group of cells is a relevant cytokine producer and can phagocytose and generate extracellular traps, actions that contribute to one of the first attempts by innate immune cells to control pathogens [26].
It is known that human polymorphonuclear leukocytes can phagocytose S. schenckii yeast-like cells in the presence of complement [27], and this observation was further supported by histological analyses of human sporotrichosis cases [28]. In a comparative study, S. schenckii yeast-like cells were phagocytosed more readily than conidia, but fungal viability was not significantly affected [29]. Moreover, soluble extracellular components of S. schenckii cultures were capable of stimulating more reactive oxygen species in human granulocytes than Candida albicans preparations, suggesting a more potent proinflammatory response against S. schenckii [30].
Here, we compared the ability of human granulocytes to produce cytokines, to take up fungal cells, and to form neutrophil extracellular traps (NETs) when interacting with conidia, yeast-like cells, or germlings from S. schenckii, S. brasiliensis, or S. globosa. Moreover, we also analyzed the contribution of some pattern recognition receptors and cell wall components during the interaction of these fungal cells with human granulocytes.
Strains and Culturing Conditions
Strains ATCC MYA-4821, ATCC MYA-4823, and FMR 9624 from S. schenckii, S. brasiliensis, and S. globosa, respectively, were used in this work. The three strains are clinical isolates previously characterized at the species level via molecular techniques and are reference strains for both genetic and phenotypic analyses [31][32][33][34][35]. Mycelia were grown in YPD broth, pH 4.5 (1% [w/v] yeast extract, 2% [w/v] gelatin peptone, and 3% [w/v] dextrose), at 28 °C. For solid plates, 2% [w/v] agar was included in the medium composition. After seven days of incubation on a solid medium, 10 mL of deionized water was added to detach conidia, and these were collected via aspiration and used for the induction of other morphologies or in interactions with human cells [36]. To obtain germlings, conidia were incubated for 11-12 h in YPD, pH 4.5, at 28 °C with shaking at 120 rpm, while dimorphism to yeast-like cells was induced by placing conidia in YPD, pH 7.8, and incubating them for four days at 37 °C and 120 rpm [20]. All morphotypes were washed six times with chilled PBS and immediately used for cell wall modifications or interactions with human cells. To assess the contribution of cell wall glycans to the interactions with human cells, fungal cells from the three morphotypes were incubated with endoglycosidase H (New England Bio-Labs, Ipswich, MA, USA) or subjected to β-elimination to remove cell wall N-linked or O-linked glycans, respectively, using previously reported methodologies [37]. For the artefactual exposure of the inner cell wall layer at the cell surface, cells were heat-killed (HK). For this, fungal cells were incubated at 60 °C for 2 h, and the absence of fungal growth was demonstrated by incubating HK cells on YPD plates, pH 4.5, at 28 °C for 5 days [20].
Ethics Statement
The use of human cells in this research was approved by Universidad de Guanajuato through its Ethics Committee. The approval reference given to this study is CEPIUG-P22-2022. Venous blood samples were withdrawn from healthy adult volunteers after information about the study was disclosed and written informed consent was signed. This study was conducted following the Declaration of Helsinki.
Isolation of Human Granulocytes
Venous blood samples were mixed with Histopaque-1077 (Sigma-Aldrich, Saint Louis, MO, USA), and cells were separated via differential centrifugation as reported elsewhere [38]. The granulocyte/red blood cell phase at the bottom of the gradient was collected and suspended in 50 mL of lysis reagent (154.4 mM ammonium chloride, 10 mM potassium bicarbonate, and 97.3 mM EDTA tetrasodium salt) [39]. Then, cells were suspended in RPMI-1640 Dutch modification (Sigma-Aldrich), and the concentration was adjusted to 5 × 10⁶ cells mL⁻¹. Cells were inspected under bright light microscopy to assess degranulation, which was absent in all preparations. Under these conditions, 96.0 ± 0.3%, 3.0 ± 0.1%, and 1.0 ± 0.2% of the cells were neutrophils, eosinophils, and basophils, respectively.
Cytokine Stimulation
Interactions were performed in U-bottom 96-well microplates, in a total volume of 200 µL. Each well contained 5.0 × 10⁵ granulocytes and 1.0 × 10⁵ fungal cells. The plates were incubated for 24 h at 37 °C with 5% (v/v) CO₂ and centrifuged for 10 min at 1800× g at 4 °C, and supernatants were saved and kept at −20 °C until used. Secreted cytokines were quantified via ELISA using the Standard ABTS ELISA Development kits (Peprotech, Cranbury, NJ, USA) for human tumor necrosis factor-alpha (TNFα), interleukin 6 (IL-6), interleukin 8 (IL-8), and interleukin 10 (IL-10). Mock wells, where only human cells were included, were used as controls in all cytokine quantifications. The readings obtained from these control wells were subtracted from all the experimental wells.
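The quantification step therefore amounts to subtracting the mock-well reading from each experimental well and converting the corrected reading into a concentration. The sketch below illustrates that arithmetic with invented optical densities and a simple linear standard curve; the curve-fitting approach is not described in the text, so the linear fit is an assumption made purely for illustration.

```python
# Sketch: mock-well subtraction and standard-curve interpolation for ELISA readings (invented values).
import numpy as np

standard_pg_ml = np.array([0, 62.5, 125, 250, 500, 1000, 2000])     # assumed standard concentrations
standard_od = np.array([0.05, 0.11, 0.19, 0.35, 0.66, 1.25, 2.40])  # assumed optical densities

slope, intercept = np.polyfit(standard_od, standard_pg_ml, 1)        # illustrative linear fit only

mock_od = 0.08                               # reading from wells containing granulocytes alone
sample_od = np.array([0.52, 0.71, 0.33])     # readings from wells with granulocytes plus fungal cells

corrected_od = np.clip(sample_od - mock_od, 0, None)  # subtract the control-well reading
concentration_pg_ml = slope * corrected_od + intercept
print(np.round(concentration_pg_ml, 1), "pg/mL")
```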
Phagocytosis Assay
For fungal labeling, cells were incubated with 1 mg mL⁻¹ Acridine Orange (Sigma-Aldrich) for 30 min at room temperature, the excess dye was washed off with PBS, and the cell concentration was adjusted to 3 × 10⁷ cells mL⁻¹ [43]. Six-well plates were used to perform the interactions at an immune cell:fungus ratio of 1:6 in 800 µL DMEM medium (Sigma-Aldrich). The plates were incubated for 2 h at 37 °C and 5% (v/v) CO₂ [37], and immune cells were detached from plates with chilled PBS and incubated with 1.25 mg mL⁻¹ Trypan Blue [44]. Phagocytosis was analyzed via flow cytometry using a FACSCanto II system (Becton Dickinson, Franklin Lakes, NJ, USA). Fifty thousand events were collected per sample through the FL1 and FL2 channels, which were previously calibrated with non-labeled immune cells [37,43,44]. Laminarin and the antibodies listed in Section 2.4 were used in preincubation experiments as described.
Analysis of Neutrophil Extracellular Traps
The analysis of NETs was performed as previously described [45], measuring the nucleic acids released into the extracellular compartment. Human granulocytes were suspended at a final concentration of 4 × 10⁷ cells mL⁻¹ in RPMI 1640, 175 µL was placed in 96-well plates previously coated with 1% bovine serum albumin, and cells were incubated for 30 min at 37 °C and 5% CO₂. Next, 25 µL of fungal cells adjusted to 4 × 10⁸ cells mL⁻¹ was added to the wells, and interactions were incubated for 4 h at 37 °C and 5% CO₂. Then, the plates were centrifuged, and the supernatant was collected and used to quantify nucleic acids via spectrophotometry at 260 nm in a NanoDrop One (Thermo Fisher Scientific). As a negative control, human cells were incubated only with PBS, while as a positive control, neutrophils were incubated with yeast-like cells from C. albicans SC5314. Alternatively, after the cell-cell interactions, the supernatants were collected, and fungal cells were stained with 20 µg mL⁻¹ calcofluor white (Sigma-Aldrich) for 30 min at room temperature. Then, cells were washed with PBS and placed on Poly-L-lysine-coated slides, fixed with 4% formaldehyde, and stained with 10 µg mL⁻¹ ethidium bromide. Cells were inspected under fluorescence microscopy, using a Zeiss Axioscope-40 microscope equipped with an Axiocam MRc camera (Zeiss, Oberkochen, Germany).
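For orientation, the A260 readings of the supernatants can be expressed as an approximate DNA concentration using the conventional conversion factor for double-stranded DNA (about 50 ng/µL per absorbance unit over a 1 cm path) after subtracting the PBS-only control; the study reports absorbance directly, so the conversion and the numbers below are purely illustrative.

```python
# Sketch: convert A260 readings of NET supernatants into an approximate dsDNA concentration (invented values).
DS_DNA_FACTOR_NG_PER_UL = 50.0  # conventional factor for double-stranded DNA at A260 = 1.0 (1 cm path)

a260_pbs_control = 0.012        # granulocytes incubated with PBS only (negative control)
a260_samples = {
    "S. schenckii yeast-like cells": 0.085,
    "S. brasiliensis yeast-like cells": 0.064,
    "C. albicans positive control": 0.110,
}

for condition, a260 in a260_samples.items():
    released_dna_ng_ul = max(a260 - a260_pbs_control, 0.0) * DS_DNA_FACTOR_NG_PER_UL
    print(f"{condition}: {released_dna_ng_ul:.1f} ng/uL extracellular DNA")
```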
Statistical Analysis
Analyses were performed in GraphPad Prism 6 software, using the Mann-Whitney U and Kruskal-Wallis tests, with a significance level set at p < 0.05. All experiments were carried out with samples from eight healthy donors assayed in duplicate. The results are shown as means and standard deviations.
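As a minimal illustration of the two nonparametric tests named above, the same comparisons can be expressed in a few lines of code; the analysis in this study was performed in GraphPad Prism, and the cytokine values below are invented solely to show the form of the tests.

```python
# Sketch: Mann-Whitney U (two groups) and Kruskal-Wallis (three groups) on invented cytokine data (pg/mL).
from scipy import stats

tnf_schenckii = [820, 910, 760, 1005, 870, 930, 800, 885]
tnf_brasiliensis = [410, 390, 455, 365, 500, 430, 470, 395]
tnf_globosa = [640, 700, 590, 720, 655, 610, 690, 665]

u_stat, p_two_groups = stats.mannwhitneyu(tnf_schenckii, tnf_brasiliensis)
h_stat, p_three_groups = stats.kruskal(tnf_schenckii, tnf_brasiliensis, tnf_globosa)

print(f"Mann-Whitney U: U = {u_stat:.0f}, p = {p_two_groups:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_three_groups:.4f}")
# p < 0.05 is the significance threshold used in the study
```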
Differential Cytokine Production by Human Granulocytes Stimulated with Conidia, Germlings, and Yeast-like Cells from Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa
Human granulocytes were co-incubated with cells from the three species under analysis, and secreted TNFα, IL-6, IL-8, and IL-10 were quantified using ELISA. We selected these cytokines because the main component of the granulocyte population was neutrophils, and these cytokines have been previously demonstrated to be highly produced by these immune cells during sepsis and interaction with different pathogens [26,46]. Figure 1 shows the results of the cytokine quantification, and it is easy to see species-specific cytokine profiles. For conidia, the three species stimulated different levels of the four cytokines, with S. schenckii cells being associated with the highest levels of TNFα, IL-6, and IL-8, followed by S. globosa and S. brasiliensis (Figure 1). Contrary to this observation, the highest IL-10 levels were associated with S. brasiliensis conidia, followed by S. globosa and S. schenckii (Figure 1). For germlings, the three proinflammatory cytokines followed the same trend observed in conidia, but IL-10 stimulation was different, with the highest levels being found in the cells stimulated with S. globosa, followed by S. schenckii and S. brasiliensis (Figure 1). For yeast-like cells, once again, the highest levels of TNFα, IL-6, and IL-8 were associated with the cells stimulated with S. schenckii, while similar levels of the three cytokines were stimulated by both S. globosa and S. brasiliensis (Figure 1). The highest IL-10 levels were stimulated by S. brasiliensis yeast-like cells, followed by S. globosa and S. schenckii cells (Figure 1). There were also differences in the levels of cytokines stimulated when compared for each morphology and species. S. schenckii germlings and yeast-like cells stimulated similar levels of the four cytokines, but IL-6 was significantly higher than the other cytokines when conidia were used in the stimulations (Figure 1). For both S. brasiliensis conidia and yeast-like cells, IL-10 was significantly higher than the other cytokines, whilst no significant differences were observed in the cells stimulated with germlings. Finally, for the three S. globosa morphologies, the level of IL-10 was higher than that of the other three cytokines (Figure 1). Collectively, these data indicate that the interaction of granulocytes with Sporothrix cells is morphology- and species-specific.
Next, we assessed the contribution of some cell wall components and pattern recognition receptors (PRRs) to the cytokine stimulation of the different Sporothrix morphologies. In all cases, we removed N-linked glycans via treatment with endoglycosidase H (Endo H) [37,41], and we removed O-linked glycans via β-elimination [20,47] or inactivation with heat, as this treatment artifactually exposes inner cell wall components at the cell surface, such as glucans and chitin [20,48,49]. Under these treatments, S. schenckii conidia stimulated similar levels of TNFα, IL-6, and IL-8 in live cells, but IL-10 levels were increased upon β-elimination or in heat-killed (HK) cells (Figure 2A). In contrast, these treatments did not affect the cytokine profile stimulated by S. brasiliensis conidia under the cell-wall-perturbing treatments (Figure 2D). In the case of S. globosa conidia, the Endo H treatment positively affected IL-10 production, whilst β-elimination and HK cells stimulated higher levels of the four cytokines analyzed (Figure 2G). Next, we took the levels of TNFα and IL-10, as signature cytokines of proinflammatory and anti-inflammatory responses, and used them to monitor the contribution of some PRRs to the stimulation of these cytokines. We blocked dectin-1 with the specific antagonist laminarin [22,23,50], whereas TLR2, TLR4, and complement receptor 3 (CR3), some of the main receptors found on the granulocyte surface [26,51], were blocked with specific monoclonal antibodies. TNFα stimulation by S. schenckii conidia was significantly dependent on TLR4 and CR3, and this dependency was partially lost in Endo H-treated and β-eliminated cells and lost in HK cells (Figure 2B). As compensation, cytokine production was in addition dependent on TLR2 in the case of Endo H-treated cells, and it was dependent on dectin-1 and TLR2 in β-eliminated and HK cells (Figure 2B). IL-10 production was significantly dependent on dectin-1 and TLR2, regardless of the treatment applied to S. schenckii conidia (Figure 2C).
For S. brasiliensis conidia, TNFα stimulation occurred via TLR4 and CR3, but this changed when cells were treated with Endo H, β-eliminated, or treated with heat, with cytokine production occurring through dectin-1 and TLR2 in these three cases (Figure 2E). IL-10 production depended on all four analyzed receptors when live conidia were used in the experiments, but upon Endo H treatment, β-elimination, or inactivation by heat, IL-10 production occurred via dectin-1 and TLR2 (Figure 2F). In the case of S. globosa conidia, both TNFα and IL-10 production was dependent on dectin-1 and TLR2, regardless of the conidia treatment (Figure 2H,I). Control experiments, where human cells were preincubated with irrelevant antibodies, showed cytokine values similar to those of non-preincubated cells.
In the case of germlings, N-linked glycan trimming positively affected the cytokine production stimulated by S. schenckii and S. globosa cells, and in S. brasiliensis germlings, only IL-10 production was positively affected after treatment with Endo H (Figure 3A,D,G). A similar trend was observed when these cells were β-eliminated or HK (Figure 3D). The removal of O-linked glycans from the S. schenckii germling did not affect cytokine production, but for S. globosa, the four cytokines significantly increased (Figure 3A,G). In both S. schenckii and S. globosa germlings, cytokine levels increased when HK cells were used for stimulation (Figure 3A,G). For the three species, TNFα stimulation was dependent on dectin-1 and TLR2, but in S. schenckii, it was also dependent on TLR4 and CR3 (Figure 3B,E,H). IL-10 was stimulated via dectin-1 and TLR2 for both S. schenckii and S. globosa germlings, but in S. brasiliensis, it was stimulated via TLR4 and CR3 (Figure 3C,F,I). In addition, Endo H-treated, β-eliminated, and HK S. brasiliensis germlings stimulated IL-10 through dectin-1 (Figure 3F). Control experiments, where human cells were preincubated with irrelevant antibodies, showed cytokine values similar to those of non-preincubated cells.
When yeast-like cells were used in this kind of interaction, the modification of the cell wall by Endo H treatment, β-elimination, or heat treatment positively affected the stimulation of TNFα, IL-6, IL-8, and IL-10 in the cases of S. schenckii and S. globosa (Figure 4A,G). S. brasiliensis yeast-like cells followed a similar trend, but in this case, IL-10 stimulation was not affected by any of the treatments applied to fungal cells (Figure 4D). When the contribution of PRRs was analyzed, we found that TNFα and IL-10 stimulation by S. schenckii yeast-like cells was dectin-1-dependent (Figure 4B,C), but the former was also dependent on TLR2 (Figure 4B). Live and HK S. brasiliensis yeast-like cells stimulated TNFα via TLR4 and CR3, but this dependency was partially lost in Endo H-treated and β-eliminated cells, and there was additional involvement of dectin-1 and TLR2 (Figure 4E). IL-10, however, was dependent solely on dectin-1 (Figure 4F). Finally, both TNFα and IL-10 production stimulated by S. globosa yeast-like cells was dectin-1- and TLR2-dependent (Figure 4I). Control experiments, where human cells were preincubated with irrelevant antibodies, showed cytokine values similar to those of non-preincubated cells.
Differential Phagocytosis of Conidia and Yeast-like Cells from Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa
Next, we analyzed the ability of human granulocytes to phagocytose these fungal cells. We omitted the analysis of germlings because of the technical limitations of our strategy to analyze uptake via cytometry, as this cell morphology is capable of clogging the internal piping of a flow cytometer [52]. The strategy used here has been previously validated for the analysis of the uptake of conidia and blastoconidia, and, depending on the fluorescence associated with cells, these can be classified as in the early, intermediate, or late stage of phagocytosis [37,43]. Here, no significant differences were observed in human cells in the early and intermediate stages of the phagocytosis of the conidia and yeast-like cells of the three fungal species under analysis (Figure 5). However, significant differences were observed in the late stage (Figure 5). S. schenckii conidia and yeast-like cells were the least phagocytosed, followed by both morphologies of S. brasiliensis and finally S. globosa cells, which were the most phagocytosed (Figure 5). In addition, the three species followed the same uptake trend, where yeast-like cells were more readily phagocytosed than conidia (Figure 5). Similar to our analysis of cytokine production, we also determined the contribution of some cell wall components and PRRs to the phagocytic process. Since the majority of granulocytes were in the late stage of phagocytosis in our experimental setting, we only analyzed cells at this stage. In the case of conidia, Endo H-treated cells and HK cells from S. schenckii and S. brasiliensis were more phagocytosed than live cells, but not when conidia were β-eliminated (Figure 6A). S. globosa conidia were similarly phagocytosed, regardless of the treatment applied to cells (Figure 6A). In all cell treatments, S. schenckii conidia phagocytosis was dependent on both dectin-1 and CR3, but in Endo H-treated cells, this dependency was diminished when compared to live and other treated cells (Figure 6B). Live, β-eliminated, and HK S. brasiliensis conidia were phagocytosed via TLR4 and CR3, but the uptake of Endo H-treated cells occurred via dectin-1 and CR3 (Figure 6C). Finally, S. globosa conidia were phagocytosed via dectin-1, regardless of the treatment applied to cells (Figure 6D). Granulocytes preincubated with irrelevant antibodies showed a similar uptake ability to non-preincubated cells. When yeast-like cells were used in the interactions with granulocytes, we found that S. schenckii and S. brasiliensis cells were more phagocytosed when treated with Endo H or with heat than live cells, but β-eliminated cells showed lower levels of uptake than the live control cells (Figure 7A). None of the treatments applied to fungal cells affected the ability of human granulocytes to phagocytose S. globosa yeast-like cells (Figure 7A). The uptake of S. schenckii yeast-like cells by human granulocytes was dependent on dectin-1, TLR2, TLR4, and CR3 in both live and Endo H-treated cells (Figure 7B). However, in the case of β-eliminated cells, the uptake occurred via dectin-1 and TLR2, while HK cells were phagocytosed through dectin-1 and CR3 (Figure 7B).
Live S. brasiliensis yeast-like cells were phagocytosed via TLR4 and CR3, and these receptors, along with dectin-1, participated in the phagocytosis of Endo H-treated S. brasiliensis cells (Figure 7C). Both dectin-1 and TLR2 participated in the uptake of β-eliminated yeast-like cells, whereas all four receptors under analysis participated in the phagocytosis of HK cells (Figure 7C). Similar to conidia, the S. globosa yeast-like cells were phagocytosed through a dectin-1-dependent mechanism, regardless of the cell treatment applied to fungal cells (Figure 7D). Granulocytes preincubated with irrelevant antibodies showed a similar uptake ability to non-preincubated cells.
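To make the flow cytometry readout described above more concrete, the sketch below shows one way early, intermediate, and late phagocytosis stages could be binned from a single fluorescence channel. The gate positions, the simulated events, and the resulting fractions are hypothetical; the actual gating strategy follows the cited protocol [37,43].

```python
# Hypothetical sketch of staging phagocytosis from per-event fluorescence.
# Gate positions and the simulated events are placeholders, not real data.
import numpy as np

def stage_fractions(fluorescence, low_gate=1e2, high_gate=1e3):
    """Fraction of fungus-associated events in each phagocytosis stage."""
    f = np.asarray(fluorescence, dtype=float)
    return {
        "early":        float(np.mean(f < low_gate)),
        "intermediate": float(np.mean((f >= low_gate) & (f < high_gate))),
        "late":         float(np.mean(f >= high_gate)),
    }

# Simulated fluorescence intensities for 10,000 granulocytes that carry
# at least one labeled fungal cell (a lognormal is only a convenient stand-in).
rng = np.random.default_rng(seed=1)
events = rng.lognormal(mean=7.0, sigma=1.2, size=10_000)
print(stage_fractions(events))
```

With these assumed gates, most simulated events fall into the late stage, mirroring the observation that the majority of granulocytes were in the late stage of phagocytosis in the experimental setting.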
Stimulation of Neutrophil Extracellular Traps by Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa
Since neutrophils are the most abundant cell population in our granulocyte preparations, we next analyzed the ability of the fungal cells to stimulate NETs. We indirectly measured the ability to stimulate these traps by quantifying the nucleic acids released into the extracellular compartment, as these are the main component of NETs [45]. Both conidia and germlings from the three species showed a similar ability to stimulate NETs (Figure 8A). On the contrary, yeast-like cells showed an increased ability to stimulate NETs, but S. schenckii and S. globosa were better stimuli than S. brasiliensis yeast-like cells (Figure 8A). Control cells only incubated with PBS released 12.9 ± 1.4 ng µL−1 nucleic acids, while human granulocytes incubated with C. albicans cells released 45.8 ± 8.8 ng µL−1 nucleic acids. These data indicate that yeast-like cells from the three Sporothrix species are better stimulants than the positive control, C. albicans. Since this morphology showed a high ability to stimulate NETs, we focused only on these cells and assessed the contribution of cell wall components. Endo H-treated and β-eliminated cells from the three species showed a lower ability to stimulate NETs than live cells, but the effect of removing O-linked glycans was markedly stronger than that of removing N-linked glycans (Figure 8B). On the contrary, HK cells from the three fungal species showed an increased ability to stimulate NETs, but S. schenckii cells were a better stimulus than the other two fungal species (Figure 8B). Representative images of the NETs stimulated with yeast-like cells are shown in Figure 9.
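As a worked example of the NET readout, the absorbance measured at 260 nm can be converted into a nucleic acid concentration. The conversion factor of 50 ng µL−1 per absorbance unit (double-stranded DNA, 1 cm path length) and the dilution factor used below are generic spectrophotometric assumptions for illustration, not the study's own calibration.

```python
# Generic A260-to-concentration conversion, used here only for illustration.
def nucleic_acid_concentration(a260, dilution_factor=1.0, ng_per_ul_per_au=50.0):
    """Estimate nucleic acid concentration (ng/uL) from absorbance at 260 nm."""
    return a260 * ng_per_ul_per_au * dilution_factor

# Example: an undiluted supernatant reading of A260 = 0.26 corresponds to
# roughly 13 ng/uL, i.e., close to the PBS control value quoted above.
print(f"{nucleic_acid_concentration(0.26):.1f} ng/uL")
```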
Discussion
The interaction between Sporothrix and human granulocytes remains scarcely explored, despite the relevant roles of these immune cells in the first line of defense. Even though neutrophils were the main type of human cells in our preparations, formally, we cannot directly link our results solely to this type of immune cell. Here, we observed that the three S. schenckii morphologies showed the highest ability to stimulate proinflammatory cytokines, whereas the lowest levels were associated with S. brasiliensis. To the best of our knowledge, this is the first report of cytokine profiles stimulated by Sporothrix when interacting with human granulocytes. Interestingly, this cytokine profile is similar to that previously observed when these fungal cells interacted with human PBMCs and human monocyte-derived macrophages [20,22,23]. This trend, however, was not observed in the case of IL-10, which was highly stimulated by S. globosa cells interacting with human PBMCs and monocyte-derived macrophages [20,22,23], but here, S. brasiliensis was the species that stimulated the highest levels of this cytokine. Thus, these data suggest a common core response in these human immune cells, with cell-type-specific response signatures. In support of this, our cytokine profile with the three fungal species is different from that reported for Sporothrix cells interacting with human dendritic cells, where S. globosa was the most potent stimulant of proinflammatory cytokines [23].
Both N-linked and O-linked glycans were dispensable for proinflammatory cytokine stimulation by S. schenckii conidia, and they seem to play a masking role for inner wall components, as cells lacking either of these compounds stimulated higher IL-10 levels with no effect on proinflammatory cytokines. However, the role of these wall components is not as passive as it may appear, because in live S. schenckii and S. brasiliensis conidia, the main PRRs involved in TNFα stimulation were TLR4 and CR3, which recognize rhamnose-containing glycans [21]. Proinflammatory stimulation was maintained in the system lacking N-linked or O-linked glycans because of the shifting of PRRs to dectin-1 and TLR2. In S. globosa conidia, a purely masking role of glycans is conceivable, because the removal of these compounds positively affected cytokine production, which was dectin-1- and TLR2-dependent in all the tested conditions. Interestingly, a similar PRR dependency was recently reported for cytokine production in human monocyte-derived macrophages [23].
In the case of germlings, S. globosa cells with no N-linked or O-linked glycans on the surface were better stimulants of cytokines than the untreated cells, but in all cases, stimulation occurred in a pathway dependent on dectin-1 and TLR2, suggesting that β-1,3-glucan is the main cell wall pathogen-associated molecular pattern involved in cytokine stimulation. These data are in line with previous cell wall characterization data indicating that S. globosa has more β-1,3-glucans exposed at the cell surface than S. schenckii or S. brasiliensis [22,53]. IL-10 stimulation by S. brasiliensis germlings increased in cells lacking N-linked or O-linked glycans, but the dependency on receptors changed from TLR4 and CR3 in nontreated cells to these receptors as well as dectin-1 in treated cells, suggesting that, for this species, glycans contribute to the stimulation of this anti-inflammatory cytokine, in contrast to what was observed for S. schenckii and S. globosa.
Contrary to the other morphologies, cytokine stimulation by yeast-like cells increased when cells were HK, β-eliminated, or treated with Endo H, indicating that cell wall perturbations positively affected the immune sensing of the three fungal species by human granulocytes. Moreover, these results reinforce the idea that the cell walls of these species have morphology- and species-specific organization and composition [20,22,53,54]. Dectin-1 and TLR2 were involved in cytokine stimulation by S. schenckii and S. globosa cells, suggesting a key role of β-1,3-glucans in sensing by granulocytes. These receptors were also involved in TNFα stimulation by S. brasiliensis yeast-like cells, but this role was shared with TLR4 and CR3, which is in line with observations in human PBMCs, where CR3 plays a differential role in the sensing of S. schenckii and S. brasiliensis yeast-like cells [21]. Our results related to yeast-like cells contrast with those previously reported, where dectin-1 was found to be dispensable for the clearance of S. schenckii in an experimental model of sporotrichosis [55]; however, they are in line with recent observations that have placed dectin-1 as a central component of anti-Sporothrix innate immunity [20,22,23,56].
Regarding phagocytosis, yeast-like cells were more phagocytosed than conidia, an observation similar to that in previous studies dealing with S. schenckii [29]; however, S. globosa cells were more readily phagocytosed than the other species. Since the interaction between β-1,3-glucan and dectin-1 is one of the main players in fungal uptake by macrophages, including C. albicans and Sporothrix species [23,57], it is possible to suggest that this increased uptake of S. globosa cells may be related to the high β-1,3-glucan levels exposed at the cell surface [22,53]. This is further supported by the fact that the perturbation of conidia and yeast-like cell walls did not affect the fungal uptake, and the sole receptor found to be involved in phagocytosis was dectin-1. Similarly, dectin-1 was also involved in the phagocytosis of S. schenckii conidia and yeast-like cells, but as in the cytokine stimulation, this dependence was not observed for S. brasiliensis. Instead, TLR4 and CR3 were the main players in the phagocytosis of both conidia and yeast-like cells.
Thus far, NET stimulation by Sporothrix cells has been scarcely studied, and there is only one report about NET stimulation by S. globosa cells [58]. Our results indicate that both conidia and yeast-like cells are capable of inducing NETs, with the latter being the better stimulant, even better than C. albicans cells, which is in line with the observation that Sporothrix extracellular components have a better ability to stimulate reactive oxygen species than C. albicans cells [30]. Interestingly, the loss of O-linked glycans significantly reduced the ability to stimulate NETs in the three fungal species, suggesting that this cell wall component is a major player in NET stimulation. Since NET stimulation was also reduced in Endo H-treated cells, but not to the levels observed for β-eliminated cells, it is possible to suggest that both N-linked and O-linked glycans are relevant for NET stimulation, likely through a costimulatory pathway, as has been described for cytokine stimulation in other fungal pathogens [48,50,59–61]. Since NET formation increased in HK cells, the involvement of β-1,3-glucan via dectin-1 is also likely.
In conclusion, we report here that the morphologies of S. schenckii, S. brasiliensis, and S. globosa play a role during the interaction with human granulocytes, generating morphology- and species-specific cytokine profiles. Nevertheless, S. brasiliensis tended to stimulate an anti-inflammatory cytokine profile, whilst the other two species elicited a proinflammatory response. S. globosa cells were the most phagocytosed cells, which occurred through a dectin-1-dependent mechanism, while the uptake of S. brasiliensis mainly occurred via TLR4 and CR3. The N-linked and O-linked glycans and β-1,3-glucans are cell wall components that play a significant role in the interaction of these Sporothrix species with human granulocytes.
Figure 1 .
Figure 1. Cytokine production by human granulocytes co-incubated with conidia, germlings, or yeast-like cells from Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa. Human granulocytes and fungal cells were co-incubated for 24 h; the supernatants were saved and used to determine the levels of secreted cytokines via ELISA. * p < 0.05 when compared with cytokines stimulated by S. schenckii or S. globosa. ** p < 0.05 when compared with cytokines stimulated by S. schenckii. † p < 0.05 when compared with the cytokine levels of the same morphology and the same species. C, conidia; G, germlings; Y, yeast-like cells. Results are shown as mean ± standard deviation from data generated with samples from eight donors analyzed in duplicate.
Figure 2 .
Figure 2. Cytokine stimulation by Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa conidia interacting with human granulocytes.In (A,D,G), human cells were co-incubated with conidia for 24 h at 37 °C and 5% (v/v) CO2; supernatants were saved and used for TNFα, IL-6, IL-8, and IL-10 quantification.In (B,C,E,F,H,I), human cells were preincubated with 200 µg mL −1 laminarin or 10 µg mL −1 of any of the following antibodies: anti-TLR2, anti-TLR4, or anti-complement receptor 3 (CR3), before co-incubation with conidia.No treatment, cells preincubated with PBS.In all cases, 100% corresponds to the system with no treatment, and the absolute values are like those shown in panels (A,D,G).Endo-H, conidia treated with endoglycosidase H; β-Elimin, conidia subjected to β-elimination; HK, fungal cells inactivated by heat.Panels (A-C) correspond to Sporothrix schenckii conidia; (D-F) correspond to Sporothrix brasiliensis conidia; and (G-I) correspond to Sporothrix globosa conidia.In (A,D,G), * p < 0.05 when compared to cytokine levels stimulated by live cells.In (B,C,E,F,H,I), * p < 0.05 when compared to the no-treatment condition of the same strain.Results are shown as mean ± standard deviation from data generated with samples from eight donors analyzed in duplicate.
Figure 3 .
Figure 3. Cytokine stimulation by Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa germlings interacting with human granulocytes.In (A,D,G), human cells were co-incubated with germlings for 24 h at 37 °C and 5% (v/v) CO2; supernatants were saved and used for TNFα, IL-6, IL-8, and IL-10 quantification.In (B,C,E,F,H,I), human cells were preincubated with 200 µg mL −1 laminarin or 10 µg mL −1 of any of the following antibodies: anti-TLR2, anti-TLR4, or anti-complement receptor 3 (CR3), before co-incubation with conidia.No treatment, cells preincubated with PBS.In all cases, 100% corresponds to the system with no treatment, and the absolute values are like those shown in panels (A,D,G).Endo-H, germlings treated with endoglycosidase H; β-Elimin, germlings subjected to β-elimination; HK, fungal cells inactivated by heat.Panels (A-C) correspond to Sporothrix schenckii germlings; (D-F) correspond to Sporothrix brasiliensis germlings; and (G-I) correspond to Sporothrix globosa germlings.In (A,D,G), * p < 0.05 when compared to cytokine levels stimulated by live cells.In (B,C,E,F,H,I), * p < 0.05 when compared to the no-treatment condition of the same strain.Results are shown as mean ± standard deviation from data generated with samples from eight donors analyzed in duplicate.
Figure 4 .
Figure 4. Cytokine stimulation by Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa yeast-like cells interacting with human granulocytes.In (A,D,G), human cells were co-incubated with yeast-like cells for 24 h at 37 °C and 5% (v/v) CO2; supernatants were saved and used for TNFα, IL-6, IL-8, and IL-10 quantification.In (B,C,E,F,H,I), human cells were preincubated with 200 µg mL −1 laminarin or 10 µg mL −1 of any of the following antibodies: anti-TLR2, anti-TLR4, or anti-complement receptor 3 (CR3), before co-incubation with yeast-like cells.No treatment, cells preincubated with PBS.In all cases, 100% corresponds to the system with no treatment, and the absolute values are like those shown in panels (A,D,G).Endo-H, yeast-like cells treated with endoglycosidase H; β-Elimin, yeast-like cells subjected to β-elimination; HK, fungal cells inactivated by heat.Panels (A-C) correspond to Sporothrix schenckii yeast-like cells; (D-F) correspond to Sporothrix brasiliensis yeast-like cells; and (G-I) correspond to Sporothrix globosa yeast-like cells.In (A,D,G), * p < 0.05 when compared to cytokine levels stimulated by live cells.In (B,C,E,F,H,I), * p < 0.05 when compared to the no-treatment condition of the same strain.Results are shown as mean ± standard deviation from data generated with samples from eight donors analyzed in duplicate.
Figure 5 .
Figure 5. Phagocytosis of Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa conidia and yeast-like cells by human granulocytes.Fungal and human cells interacted for 2 h at 37 °C and 5% (v/v) CO2 before human cells were analyzed using flow cytometry.Cells were selected for quantification when interacting with at least one fungal cell.* p < 0.05 when compared to conidia from the same species.† p < 0.05 when compared with the same morphology of the other two fungal species.Data are shown as means ± SD from eight donors analyzed in duplicate.
Figure 6 .
Figure 6.Contribution of cell wall components and pattern recognition receptors to the phagocytosis of Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa conidia by human granulocytes.In (A), human granulocytes were incubated for 2 h at 37 °C, and phagocytosis was analyzed using flow cytometry.In (B-D), human granulocytes were preincubated with 200 µg mL −1 laminarin or 10 µg mL −1 of any of the following antibodies: anti-CR3, anti-TLR2, or anti-TLR4.Then, phagocytosis was analyzed as described in the Materials and Methods Section.All the interactions were performed in the presence of 5 µg mL −1 polymyxin B. No treatment, cells preincubated with PBS.Results correspond to cells in the late stage of phagocytosis.For all cases, 100% corresponds to human cells preincubated with PBS, and the absolute values are similar to those shown in panel (A).Endo H, conidia treated with endoglycosidase H; β-Elimin, conidia treated by β-elimination; HK, heat-killed conidia.In (A), * p < 0.05 when compared to live cells.In (B-D), * p < 0.05 when compared to the no-treatment condition of the same strain.In (B), experiments were performed with S. schenckii conidia.In (C), experiments were performed with S. brasiliensis conidia, while in (D), S. globosa conidia were used.In all panels, data are shown as means ± SD from eight donors analyzed in duplicate.
Figure 7 .
Figure 7. Contribution of cell wall components and pattern recognition receptors to the phagocytosis of Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa yeast-like cells by human granulocytes.In (A), human granulocytes were preincubated with yeast-like cells and incubated for 2 h at 37 °C, and phagocytosis was analyzed using flow cytometry.In (B-D), human granulocytes were preincubated with 200 µg mL −1 laminarin or 10 µg mL −1 of any of the following antibodies: anti-CR3, anti-TLR2, or anti-TLR4.Then, phagocytosis was analyzed as described in the Materials and Methods Section.All the interactions were performed in the presence of 5 µg mL −1 polymyxin B. No treatment, cells preincubated with PBS.Results correspond to cells in the late stage of phagocytosis.For all cases, 100% corresponds to human cells preincubated with PBS, and the absolute values are similar to those shown in panel (A).Endo H, yeast-like cells treated with endoglycosidase H; β-Elimin, yeast-like cells treated by β-elimination; HK, heat-killed yeast-like cells.In (A), * p < 0.05 when compared to live cells.In (B-D), * p < 0.05 when compared to the no-treatment condition of the same strain.In (B), experiments were performed with S. schenckii yeast-like cells.In (C), experiments were performed with S. brasiliensis yeast-like cells, while in (D), S. globosa yeast-like cells were used.In all panels, data are shown as means ± SD from eight donors analyzed in duplicate.
Figure 8 .
Figure 8. Stimulation of neutrophil extracellular traps by Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa.In (A), human granulocytes and conidia, yeast-like cells or germlings of S. schenckii, S. brasiliensis, or S. globosa were placed at a MOI of 1:10 and incubated for 4 h at 37 °C and 5% CO2.Then, plates were centrifuged, and supernatants were used to quantify nucleic acids by reading absorbance at 260 nm.In (B), similar experiments to those described in panel (A) were performed but only using yeast-like cells.Endo H, yeast-like cells treated with endoglycosidase H; β-Elimin, yeast-like cells treated by β-elimination; HK, heat-killed yeast-like cells.In (A), * p < 0.05 when compared to conidia or germlings; † p < 0.05 when compared with cells of the same morphology.In (B), * p < 0.05 when compared to live cells; † p < 0.05 when compared with cells of the same morphology.Results are shown as mean ± standard deviation from data generated with samples from eight donors analyzed in duplicate.
Figure 9 .
Figure 9. Representative images of neutrophil extracellular traps stimulated by Sporothrix schenckii, Sporothrix brasiliensis, and Sporothrix globosa yeast-like cells.Human granulocytes and yeast-like cells of S. schenckii, S. brasiliensis, or S. globosa were placed at a MOI of 1:10 and incubated for 4 h at 37 °C and 5% CO2.DNA was stained with ethidium bromide (panels A,B), while fungal cells were labeled with calcofluor white (C).Panel (A) corresponds to non-stimulated human granulocytes that were used as controls.Panels (B,C) correspond to the ethidium bromide and ethidium bromide plus calcofluor white staining, respectively.Scale bars = 20 µm. | 2023-10-06T15:21:46.763Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "1d16b841b33a55c9c9fe0b7b6d41f657b7843d74",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2309-608X/9/10/986/pdf?version=1696390659",
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "58caf4f1fdb50d0be22afd6eb13576348f67821c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219087690 | pes2o/s2orc | v3-fos-license | The Role of Grain Boundaries on Ionic Defect Migration in Metal Halide Perovskites
Halide perovskites are emerging as revolutionary materials for optoelectronics. Their ionic nature and the presence of mobile ionic defects within the crystal structure have a dramatic influence on the operation of thin-film devices such as solar cells, light-emitting diodes, and transistors. Thin films are often polycrystalline and it is still under debate how grain boundaries affect the migration of ions and corresponding ionic defects. Laser excitation during photoluminescence (PL) microscopy experiments leads to formation and subsequent migration of ionic defects, which affects the dynamics of charge carrier recombination. From the microscopic observation of lateral PL distribution, the change in the distribution of ionic defects over time can be inferred. Resolving the PL dynamics in time and space of single crystals and thin films with different grain sizes thus provides crucial information about the influence of grain boundaries on the ionic defect movement. In conjunction with experimental observations, atomistic simulations show that defects are trapped at the grain boundaries, thus inhibiting their diffusion. Hence, with this study, a comprehensive picture highlighting a fundamental property of the material is provided while also setting a theoretical framework in which the interaction between grain boundaries and ionic defect migration can be understood.
Introduction
Halide perovskites have rapidly become an attractive class of materials for optoelectronic applications, including photovoltaics, [1] light-emitting diodes (LEDs), [2] light detection, [3] and energy storage. [4] The stoichiometric unit of a halide perovskite with standard formula ABX3 consists of either one organic or inorganic monovalent ion (A+) placed in a cage made up of eight corner-sharing octahedra, each containing one divalent metal ion (B2+) (typically Pb or Sn), and six halides (X−). All three ionic constituents may give rise to defects in the form of vacancies, interstitials, or antisite substitutions. [5] Ionic defects have been shown to be very mobile at room temperature, [6] being able to migrate within the material (lattice) when subjected to temperature or defect concentration gradients, as well as external stimuli such as light or an electric field. [7,8] As this gives rise to substantial transient
phenomena in the electrical response of a device, a considerable portion of the literature to date discusses the challenges arising from ionic defects created during device operation and their migration. Not only does ionic migration play a crucial role in the device's current-voltage hysteresis, [9,10] but it is also considered the underpinning cause of both reversible and irreversible degradation of the perovskite absorber. [11,12] Since ionic defect formation, and the consequent ionic defect movement, are evidently an integral part of the operation and stability of perovskite-based optoelectronic devices, it is imperative to continue gaining an understanding of the migration dynamics.
Ion migration relates closely to the defect chemistry of the material. [13] Theoretical calculations [13,14] and experimental work [15] support the hypothesis that the most abundant defects in methylammonium lead iodide (MAPbI3) perovskites are iodide vacancies (VI•) and interstitials (Ii′), as these have the lowest formation energies and are likely generated as Frenkel pairs. Ionic defect migration has been proposed to occur via efficient vacancy hopping or interstitial kick-off mechanisms. [10] One of the leading questions of the experiments presented here is the effect of grain boundaries (GBs) on ionic migration in polycrystalline films, as these cause a discontinuity of the crystal lattice with substantial defect density. This is of particular relevance for solution-processed perovskite thin films, which typically contain a large number of GBs.
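To make the connection between migration barriers and hopping explicit, the rate of a thermally activated hop can be estimated with a simple Arrhenius expression, ν = ν0 exp(−Ea/kBT). The attempt frequency and the two activation energies in the sketch below are order-of-magnitude assumptions chosen purely for illustration; they are not values taken from the cited studies.

```python
# Illustrative Arrhenius estimate of ionic hop rates; nu0 and Ea are assumed,
# order-of-magnitude values, not parameters extracted from the references.
import numpy as np

kB = 8.617e-5   # Boltzmann constant in eV/K
nu0 = 1e12      # attempt frequency in Hz (typical phonon-scale assumption)
T = 300.0       # temperature in K

for label, Ea_eV in [("lower barrier (assumed 0.3 eV)", 0.3),
                     ("higher barrier (assumed 0.5 eV)", 0.5)]:
    hop_rate = nu0 * np.exp(-Ea_eV / (kB * T))
    print(f"{label}: hop rate ~ {hop_rate:.2e} Hz")

# A 0.2 eV increase in barrier suppresses the hop rate by roughly three orders
# of magnitude at room temperature, which is why reported differences in
# activation energy translate into very different migration time scales.
```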
The exact interplay of GBs and ionic motion is still under discussion in the literature. Some suggest that ionic defect migration could be facilitated by GBs, [16,17] which is based on observations of a larger hysteresis at the GBs compared to the grain interior, as concluded from atomic force microscopy experiments. [16] Moreover, the activation energies for ions to migrate from their proper places in the lattice are reportedly higher in single crystals compared to thin films with smaller grains (≈300 nm). [18] This may be due to the increased difficulty for ions to move in the absence of GBs because of the lack of ionic vacancies. On the other hand, others and some of us have observed a reduction in hysteresis for devices containing an absorber layer with larger grain size. [19,20] It was concluded from intensity-modulated photocurrent spectroscopy that ionic defect movement is faster when the number of GBs is reduced, [19] which points to GBs inhibiting their migration. Another recent report has shown that even though the presence of mobile ions is a prerequisite for hysteresis, the trapping and detrapping of ionic defects play an important role in the slow transients seen in devices. [21] To extend our understanding of how the microstructure influences ionic defect migration, we recognize the need for direct observations of the migration dynamics and for gaining a spatially resolved microscopic picture. Photoluminescence (PL) microscopy offers the opportunity to study the light absorber material while excluding the influence of other contact layers or metal electrodes that are typically used in optoelectronic devices. Hence, PL microscopy has particularly been used to detect the effect of defect migration in halide perovskites. [8,22,23] One characteristic signature is the change in relative PL quantum yield, which is directly related to the rate of radiative versus nonradiative recombination, which in turn is dictated by the local defect chemistry (and density), which can be altered by light. [24,25] It has been widely reported that light can drive out the iodide content and change the PL yield; in other words, iodide defects have been shown to alter the PL yield in halide perovskites. [22,26,27] With PL microscopy we can induce local defect formation and migration while to some extent resolving the microstructure of the material, thus allowing us to correlate the dynamics of defect diffusion with the presence or absence of GBs.
In this study, PL microscopy is used as a means to induce ionic defect migration with the excitation beam, consecutively track the motion (redistribution) of defects in real time through the resulting fluctuation in PL intensity, and observe the extent to which the material recovers from the limited perturbation state. We opted to use MAPbI3 to limit the possible combinations of constituent ions and their associated defects. We observe changes in the PL signal with subsecond time resolution (50 ms), which allows us to directly resolve defect-induced changes in the optoelectronic properties of the material relating to ionic motion. By correlating PL microscopy and energy dispersive X-ray spectroscopy (EDX), we confirm that light promotes ionic diffusion out of the excited spot, creating additional defects in the material. By monitoring the dynamics of the relative PL quantum yield over time and space in MAPbI3 crystals and films of different grain sizes, we can investigate the impact of GBs on the defect migration. With the support of atomistic simulations, we conclude that the GBs inhibit the lateral movement of defects in the material, i.e., their spreading across the sample, which we also demonstrate with devices comprising perovskite absorbers of different grain sizes. Here we show that devices must have monolithic grains in order to exhibit a fast ionic response. The fast transient response in the device can potentially reduce the reversible losses due to slow ion migration seen in devices under operational conditions. [11] Our study gives a broader understanding of how ionic defect migration in halide perovskites relates to the microstructure of the material.
Photoinduced Ion Migration
To establish the link between PL yield and defect migration, we expose a MAPbI3 thin film fabricated by a solution-based one-step method [28] (see the Experimental Section for more details) to 1 min of continuous wave laser excitation (450 nm). After exposure with focused excitation, we switch to a wide-field excitation to obtain a PL map and observe that the region exposed to the focused laser has a significantly reduced PL yield (Figure 1a). From the scanning electron microscope (SEM) micrograph of the same area (Figure 1b1), it appears as if the region has structurally collapsed to form what resembles a crater. Furthermore, it becomes evident from the EDX images (Figure 1b2–b4) that there is a significant ionic redistribution. Most obvious is the almost complete lack of iodide (Figure 1b2), which is in agreement with previous reports on halide migration away from the light-exposed spot. [22,26,27] There is also no indication that it has accumulated elsewhere, although it is possible that the EDX image resolution prevents a clear representation of the elemental distribution outside the illuminated region. Figure 1b3 shows a slight reduction of the C signal after laser excitation, which could imply a removal of the methylammonium cation (CH3NH3+) due to laser excitation. The signal for Pb, on the other hand, is higher in the exposed region compared to the surrounding regions (Figure 1b4). At first this may seem counter-intuitive; however, consistent with the EDX images of I and C, the increase of the Pb signal simply denotes an increase of the concentration of Pb with respect to the other elements, I and C. Whether this is due to the concomitant formation of I2 and Pb0 [29] or to the collapse of the tetragonal structure into PbI2 due to the removal of MA+ and I−, [30] goes beyond the aim of this work. Indeed, the reduced concentration of I and C in the illuminated region can further enhance the Pb signal in EDX imaging because it reduces the reabsorption of Pb-emitted X-rays by MA+ and I−.
Most importantly, this correlation between PL and EDX allows us to establish that ion (re)distribution can be driven either directly or indirectly (via thermal effects) by light and that it correlates strongly with the changes in PL quantum yield. A reduction of PL yield correlated with a depletion of iodine can be rationalized by migrating ions increasing the defect concentration, which consequently increases the portion of nonradiative recombination. As also reported in several publications, [22,26,27] we suspect that it is the halide species, having the lowest activation energy, that migrates out from the laser-excited spot, forming the defects. A recent report by Motti et al. shows that a reduction in PL yield after light soaking relates to an increment in the halide interstitial concentration, which can trap holes. [31] Moreover, the formation of metallic Pb has been shown to introduce deep traps in the bandgap of MAPbI3, introducing nonradiative recombination centers. [32] It is also worth noting that the reduction of the C signal in EDX might be related to the formation of MA+ vacancies, which are known to diffuse slowly under light exposure. [33] Identifying the type of point defects generated and set in motion by intense light exposure requires additional measurements stretching beyond the scope of this work and is therefore the focus of a follow-up study.
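The rationalization above, namely that a higher defect density lowers the PL yield by increasing the nonradiative recombination channel, can be illustrated with a toy two-channel model. The rate constants, carrier density, and trap densities in the sketch below are illustrative assumptions and are not parameters fitted to this work.

```python
# Toy two-channel recombination model: relative PL yield falls as the density
# of nonradiative (defect) centers grows. Rate constants and densities are
# illustrative assumptions, not fitted parameters from this study.
def relative_pl_yield(n_carriers, trap_density, k_rad=1e-10, k_trap_per_center=1e-8):
    """PL yield = radiative rate / total rate for a simple radiative-vs-trap picture."""
    radiative = k_rad * n_carriers              # s^-1, bimolecular-like radiative channel
    nonradiative = k_trap_per_center * trap_density  # s^-1, trap-mediated channel
    return radiative / (radiative + nonradiative)

n = 1e16  # assumed photogenerated carrier density, cm^-3
for traps in (1e14, 1e15, 1e16):
    print(f"trap density {traps:.0e} cm^-3 -> relative PL yield "
          f"{relative_pl_yield(n, traps):.2f}")
```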
Tracking Ion Migration via Spatially Resolved Photoluminescence
To further understand the effect of microstructure on the ion migration, we compare MAPbI 3 samples in the form of polycrystalline thin films of different grain sizes and a crystal grown by inverted crystallization [34] (fabrication details are given in the Experimental Section). Having established that we can induce ionic defect migration with a focused laser beam, we subject the samples to a measurement protocol in which the excitation beam is either focused into a spot (full width at half maximum, FWHM, of 2.5 µm) in the center of the field of view, or spread across the entire field of view spanning an area up to 40 µm in diameter (defocused). A detailed description of the experimental setup and measurement protocol can be found in Note S1 (Supporting Information). Figure 2a schematically illustrates the two modes also showing the corresponding excitation spot sizes. In both excitation modes, the excitation power is kept constant, which means the excitation density in the focused mode increases by over two orders of magnitude compared to the defocused mode. Irrespective of whether the excitation is focused or defocused, the PL emission from the sample is collected from the entire field of view (40 µm diameter) of the sample at a frame rate of 20 Hz. This provides a unique opportunity to not only detect the spatially evolving PL signal from the area of the sample which is not directly excited in the focused mode, but also detect any changes in the PL as a result of focusing the laser (comparing defocused PL images before (PL 0 ) and after focusing the laser (PL 1 ); see insets of Figure 2d,e). Figure 2b,c shows a spatially resolved time evolution of PL from the thin film and a freshly cleaved surface of the crystal in the focused mode excitation interval (a few selected frames over 6 s, see Video S1 for the thin film and Video S2 for the crystal). Since the redistribution of PL is radially symmetric with respect to the point of excitation, we plot the normalized PL intensity as a function of the distance d from the center of the laser spot (d = 0 µm). The radial distribution of PL intensity extracted at each time instant (PL(d,t)) is normalized to the initial PL distribution (PL(d,t = 0)) measured just as the focused excitation is turned on. For both samples, a reduction of the normalized PL (values below the red dotted line) is observed for d < ≈ 2 µm. We note that this correlates well with the region where the focused excitation laser directly strikes the sample (blue line). Overall, the PL signal of the film and crystal shows relevant qualitative differences both in the shape and variation with time. In the crystal, the PL diminishes over a broad range (broader than the excitation spot) upon excitation with the focused beam. On the contrary, PL in the film decreases only in the laser spot; outside this region PL increases. Moreover, while for the crystal PL(d,t) keeps decreasing with time during the first 6 s of the measurement, in the film PL(d,t) continuously increases.
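The radial normalization described above lends itself to a simple numerical treatment. The sketch below assumes a frame stack recorded at 20 Hz with a known pixel size; the array names, pixel size, and synthetic data are illustrative stand-ins, not values from the actual setup. It shows one way to compute PL(d,t)/PL(d,t = 0) by azimuthally averaging each frame around the excitation spot.

```python
import numpy as np

def radial_profile(image, center, pixel_um, bin_um=0.25):
    """Azimuthally average an image around `center`; returns bin centers (µm) and PL(d)."""
    ny, nx = image.shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(xx - center[0], yy - center[1]) * pixel_um   # radial distance in µm
    edges = np.arange(0.0, r.max() + bin_um, bin_um)
    idx = np.digitize(r.ravel(), edges)
    prof = []
    for i in range(1, len(edges)):
        vals = image.ravel()[idx == i]
        prof.append(vals.mean() if vals.size else np.nan)
    return 0.5 * (edges[:-1] + edges[1:]), np.array(prof)

def normalized_pl(frames, center, pixel_um):
    """PL(d, t) / PL(d, t = 0) for a stack of frames shaped (time, y, x)."""
    d, ref = radial_profile(frames[0], center, pixel_um)
    norm = np.empty((len(frames), len(d)))
    for t, frame in enumerate(frames):
        norm[t] = radial_profile(frame, center, pixel_um)[1] / ref
    return d, norm

# Illustrative synthetic stack: 120 frames (6 s at 20 Hz), 200 x 200 pixels, 0.2 µm/pixel
frames = np.random.poisson(100.0, size=(120, 200, 200)).astype(float)
d, pl_norm = normalized_pl(frames, center=(100, 100), pixel_um=0.2)
```

In such a representation, values of pl_norm below one correspond to the PL reduction inside the directly excited region, and values above one to the brightening observed farther out in the film.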
An interesting feature for both samples is that PL is detected at distances far from the excitation point. Charge carrier diffusion lengths in metal halide perovskites have been reported to be on the order of a few µm, [35,36] which may certainly explain our observations. Another contributing factor may also be photon recycling. [37] We are reluctant to attribute this to PL caused by direct absorption of photons from the tail of the focused excitation source primarily for the reason that the PL profiles for neither sample (gray traces) resemble that of the focused excitation profile (blue traces). Regardless of how the PL far from the focused excitation spot is generated, it is more importantly established that there are changes in the spatially distributed PL yield evolving over seconds (slow time scale), which we attribute to ionic defect redistribution. This is in agreement with a study by Li et al., [23] in which PL changes are also observed in space in a MAPbI 3 thin film due to an applied electric field-induced ionic redistribution.
Although we have strong evidence for light inducing ionic redistribution, we refrain from proposing by which mechanism this occurs at this stage. In a recent report, light-driven ion migration has been attributed to photochemical processes involving various rates of trapping and detrapping of charge carriers interacting with halide defects. [38] Alternatively, it has been also proposed that light can induce local electric fields causing ionic defect migration, although different explanations of how that field arises have been proposed including the photoinduced Stark effect, [39] as well as an interaction between charge carriers and surface adhered superoxide species. [40] The comparison between the PL images before (PL 0 ) and after (PL 1 ) focusing the excitation can provide information on lasting changes (in minutes or hours) in the spatially distributed PL intensity caused by the focused excitation. For a simplified comparison between samples, we extract the relative change in PL intensity, ΔPL = PL 1 /PL 0 , again as a function of radial distance (d). In Figure 2d, we demonstrate that ΔPL shows a significant reduction in the range between d = 1 and 4 µm (red shaded area) for the thin film. Comparing this to the FWHM of the focused laser distribution profile (blue traces), this reduction of PL coincides with the region receiving the direct excitation from the focused laser. The insets show the PL images before and after the focused excitation where the PL reduction is observed. A value of ΔPL < 1 indicates an increase of defect-mediated nonradiative recombination, which is associated with a redistribution of ions and subsequent defect formation as discussed above. Subjecting the freshly cleaved crystal to the same measurement procedure results in a noticeably larger reduction of ΔPL observed at a greater radial distance compared to the thin film (Figure 2e), which is also evident in the PL images (see insets in Figure 2e). At a radial distance of d = 12 µm, ΔPL = 1 for the thin film (no lasting effect from focused excitation), as opposed to the crystal case, where ΔPL ≈ 0.3. We, therefore, conclude that there is stronger resistance to the spatial redistribution of ionic defects in the thin film compared to the crystal.
Grain Boundaries as an Energy Barrier for Ion Migration
As shown in Figure S2a (Supporting Information), the average grain size of the thin film shown in Figure 2 is ≈200 nm, which is approximately one order of magnitude smaller than the diameter of the focused excitation spot (≈2.5 µm). Therefore, the laser spot directly impinges on an area containing several grains, which is not the case for the crystal (see Figure S2b in the Supporting Information). Hence, we prepare a thin film containing large grains, which serves as an intermediate scenario between the small grain thin film and the single crystal. The SEM micrograph demonstrates that this "big-grain film" has grains with sizes equivalent to, or even larger than, the focused laser excitation spot (see Figure S2c in the Supporting Information). As such, we can focus the excitation into a single grain while observing the effect on the neighboring grains that are not directly excited. Here, we rely on SEM micrographs to define the size of the grains. We acknowledge that these can vary from what is determined by SEM, [41] and that a group of grains can be mistaken for a single one using this technique. [42] However, this study emphasizes the difference in the GB density in the three samples with notably different microstructure (further optical and morphological characterization of the samples can be found in Notes S2 and S3 in the Supporting Information). Hence, we are confident that adequate information can still be drawn from relative differences between SEM analyses. In Figure 3, we isolated a relatively large grain (≈5 µm diameter) and subjected it to the same measurement procedure as for the thin film and crystal. We did not correlate SEM micrographs with PL measurements but could identify the grain boundaries in the PL microscope as regions where the PL intensity increased particularly strongly during the first 30 s of exposure to light. This photobrightening at GBs can be explained by an intense light-induced healing of defects, [43] that, in comparison to the bulk of the grain where the defect density is lower, occurs at a faster rate. In Figure 3a, the PL map is shown in defocused mode (PL 0 ) prior to focusing the excitation, where the large grain is highlighted with a dashed yellow line. When the excitation is focused into the grain (blue spot), we observe from the normalized PL images that emission is coming strictly from the grain that is directly excited (Figure 3b) and not from outside its GB. This points to GBs either efficiently mediating nonradiative recombination or that charge carriers are simply deflected, which has been previously proposed. [5,44] As the focused excitation remains, we observe similar spatial redistribution of PL to what occurs in the single crystal (see Video S3 in the Supporting Information), establishing the light-induced ionic defect migration. As the excitation is switched back to defocused mode (PL 1 ), we can confirm that PL remains significantly reduced exclusively for the grain that is excited (Figure 3c). This becomes more obvious in Figure 3d when plotting ΔPL from a cross-section of the field of view (indicated by the red line in Figure 3c), where not only a strong reduction of PL is observed for the excited grain, but also that PL remains largely unaffected outside the boundaries of the large grain. Thus, this intermediate case strengthens the hypothesis that GBs introduce barriers for ion migration.
We further fabricated devices in n-i-p architecture using thin films of different grain sizes to examine the effect of ionic defect dynamics in working solar cells. Figure 3e shows the current density-voltage curves of devices employing two distinct grain size distributions, in which the "big grain" device has an average absorber grain size of 400 nm compared to an average grain size of 200 nm for the "small grain" device. The devices have fairly similar performance where the small-grain and big-grain devices show short circuit current density of 21.1 and 21.3 mA cm −2 , open circuit voltage of 1074 and 1095 mV, and fill factor of 66.4% and 68.0% respectively (forward scan), resulting in 15.0% and 15.8% power conversion efficiencies. The small-grain device exhibits a slight hysteresis compared to the big-grain device which might indicate a difference in the ionic response time. Furthermore, the devices were subjected to 0.1 V bias to track the current transient behavior under illumination (Figure 3f). As expected, the big grain device has a faster current response with less variance under a steady bias. Thus, in agreement with a previous report by Correa-Baena et al., [19] the microstructure of the film can be linked to the device's behavior, in which the electronic transient on long time scales has been attributed to the ionic double layers introduced by ionic defect migration. [11,45]
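As a quick consistency check (not part of the original analysis), the quoted efficiencies follow directly from the reported short circuit current density, open circuit voltage, and fill factor, assuming the standard AM1.5G input power of 100 mW cm −2 :

```python
def pce(jsc_mA_cm2, voc_mV, ff_percent, p_in_mW_cm2=100.0):
    """Power conversion efficiency (%) from Jsc, Voc, and fill factor."""
    return jsc_mA_cm2 * (voc_mV / 1000.0) * (ff_percent / 100.0) / p_in_mW_cm2 * 100.0

print(pce(21.1, 1074, 66.4))  # small-grain device: ~15.0 %
print(pce(21.3, 1095, 68.0))  # big-grain device: ~15.9 %, i.e., the reported 15.8 % within rounding
```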
Dark Recovery
Having established that the light-induced ion migration is inhibited by GBs, we study the effect of allowing the sample to rest in the dark after light soaking, which has been reported to restore performance in PSC devices. [11] For both the crystal and the big-grain film, where a noticeable ion migration is detected over large areas, we observe the self-healing (recovery of PL) occurring at different time scales (Figure 4a), which has also been observed in several reports [27,46,47] and is attributed to ionic defect migration. [48] The main differences in the recovery between the two samples are the time scales and the extent to which recovery occurs (Figure 4b). Complete recovery occurs for the crystal within 1 h, while the recovery for the big-grain film is still incomplete after 12 h. We note that for the big-grain film, recovery starts from the GB perimeter and progresses to the center of the grain, where the PL yield remains low even after 12 h. The longer recovery time seen along the perimeter of the grain might be due to either a higher intrinsic defect concentration and/or an intragrain microstructure evolution constraint, which prevents the restoration of the initial state. We propose that the ions that have been driven toward, and possibly accumulated or "kinetically trapped" at the GBs, can return and "heal" only the damaged part in its proximity.
It is important to note here that we observe a morphology change in the big grain film (Figure S3, Supporting Information) and similarly in the small grain film (Figure 1) after the light exposure. More importantly, the fact that the PL yield can recover implies that constituent ions did not leave the samples after excitation, but rather redistributed within the sample. We attribute the recovery to ionic defect annihilation, which leads to a reduction of nonradiative recombination, and thus, recovery of PL yield.
Atomistic simulations support the hypothesized scenario. Classical molecular dynamics (MD) makes it possible to simulate large-scale models of MAPbI 3 including point-defects and GBs. Interatomic forces of MAPbI 3 can be modeled by the MYP force field developed by Mattoni et al., [49,50] which has been successfully applied to study vibrations and thermodynamic properties of MAPbI 3 , degradation in water [51,52] as well as diffusion of point-defects. [6] Here, we apply MD to simulate the diffusion of one iodine vacancy (i.e., the most mobile point defect in MAPbI 3 [6] ) in the presence of the Σ5/(102) grain boundary, i.e., a prototypical boundary in MAPbI 3 forming along the (102) crystallographic plane with 53.1° tilt angle. [53] The calculated dynamics shows that the mobility of the iodine vacancy is strongly reduced by the presence of boundaries. Figure 5a shows the position-time plot of the vacancy position within a grain. The trajectory within the atomistic model is also represented in Figure 5b. The vacancy is initially placed at the center of a crystal grain annealed at 400 K for 0.5 ns. We chose this temperature due to the higher diffusivity of ions at higher temperature. Note that the diffusion mechanism is unchanged between 400 and 300 K, whereas higher temperature shortens the simulation time. When the defect is far from the boundary, it diffuses randomly through stochastic jumps induced by temperature. Accordingly, the mean square displacement increases with time and the defect reaches the GB after 0.25 ns, at a distance of ~6 nm from the initial position. Hereafter, the distance of the defect from the boundary remains (essentially) constant until the end of the simulation, indicating a trapping of the defect at the GB. These findings provide theoretical evidence that GBs can trap defects, thus reducing the overall diffusivity of iodide defects in polycrystalline films.
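The free-diffusion versus trapping behavior described above can be quantified from the vacancy position trace via its mean square displacement (MSD). The following sketch assumes a hypothetical trajectory array of shape (n_frames, 3); it is not the actual simulation output.

```python
import numpy as np

def mean_square_displacement(positions):
    """MSD as a function of lag time, averaged over all time origins."""
    n = len(positions)
    msd = np.empty(n - 1)
    for lag in range(1, n):
        disp = positions[lag:] - positions[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

# Hypothetical vacancy trace: 500 frames (1 ps apart, 0.5 ns total), coordinates in nm
traj = np.cumsum(np.random.normal(0.0, 0.05, size=(500, 3)), axis=0)
msd = mean_square_displacement(traj)
# Free diffusion: MSD grows roughly linearly with lag (D = MSD / (6 t) in 3D);
# trapping at a grain boundary shows up as a plateau in the MSD at long lag times.
```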
The physical origin of the trapping can be explained in energetic terms. Grain boundaries are regions of extended defects, typically presenting a local disorder (e.g., coordination defects, strain, etc.) with a corresponding excess of local energy. Accordingly, the energy of a defect at a GB is lower than in the bulk of the crystalline grain. This is confirmed by the potential energy of the vacancy as a function of the position along the polycrystalline system reported in Figure 5c, bottom panel. In practice, we compute the potential energy of the system in which we placed an iodine vacancy at different positions along the x direction of the polycrystalline sample after a local relaxation (dashed line in Figure 5b). The energy profile shows local minima at both GBs (PbI-terminated (blue) and MAI-terminated (green)), indicating that these positions are energetically favored with respect to bulk crystalline regions. In order for the defect to escape from the boundaries it is necessary to overcome an energy barrier ΔE † ≈ 0.5 eV (1 eV) for the PbI-terminated (MAI-terminated) boundaries. This result is in agreement with the finite temperature dynamics discussed above during which the vacancy is easily captured at the MAI-terminated boundary and never released on the timescale of the simulation.
Present experimental and MD results bring us to the following conceptual scheme: i) When the film is excited by the laser, the defect concentration (V I • and I i ′, and possibly the corresponding MA defects) increases (Figure 1). ii) Defects can migrate toward and get trapped by the absolute energy minimum (MAI-terminated GB), or toward the local minimum (PbI-terminated GB) because of Brownian-like random dynamics, in which defects jump from site to site due to thermal fluctuations (Figure 5a,b). A net force associated with the energy profile shown in Figure 5c makes the random jumps asymmetric, which results in a net attraction toward the grain boundaries, where they get trapped. iii) When the perturbation, here the focused laser, ceases its action, the system tends to restore the equilibrium defect concentration by annihilating the excess defects (see Figure 4). iv) This requires that complementary defects get detrapped from the grain boundaries, meet, and annihilate. In other words, the recovery time is determined by the detrapping time, which, following the transition state theory, [10,13,54] depends exponentially on the barrier τ = ℏ/k B T exp[ΔE † /k B T], with ℏ the reduced Planck constant, k B the Boltzmann constant, and T the temperature. Thus, a recovery time of minutes/hours, as experimentally observed in Figure 4, corresponds to a barrier of ≈0.9–1.1 eV, which is similar to the predictions drawn from the herein shown MD simulations.
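The quoted barrier range can be checked by evaluating the transition-state expression above at room temperature; the short script below, assuming T = 300 K, reproduces the minutes-to-hours recovery times.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s, reduced Planck constant (as in the expression above)
KB = 1.380649e-23        # J/K
EV = 1.602176634e-19     # J per eV

def recovery_time(barrier_eV, T=300.0):
    """tau = hbar/(kB*T) * exp(dE/(kB*T)); using h instead of hbar only changes the prefactor by 2*pi."""
    return HBAR / (KB * T) * np.exp(barrier_eV * EV / (KB * T))

for dE in (0.9, 1.0, 1.1):
    print(f"{dE:.1f} eV -> {recovery_time(dE):.3g} s")
# ~0.9 eV gives tens of seconds to a minute, ~1.1 eV roughly a day,
# bracketing the minutes-to-hours recovery seen in Figure 4.
```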
Conclusion
In this study, we relate signatures of ion migration to the microstructure of MAPbI 3 and study the effects, as well as kinetics, of ionic defect migration by PL microscopy. By analyzing the lateral evolution of PL intensity in thin films and single crystals induced by a focused excitation beam, we conclude that grain boundaries inhibit ion movement. The change in PL yield stems from migrating iodide and possibly methylammonium ions that are likely to saturate corresponding vacancies. At the same time, the PL yield reduction comes from the removal of ions from their crystalline sites, which introduces nonradiative recombination centers. This reduction can last for minutes and hours; however, this process can be partially or fully reversible depending on the microstructure. The recovery of PL confirms the possibility of defects being trapped in the proximity of GBs, from where they can migrate back to defective crystalline sites to "heal" the lattice upon cessation of the excitation. The experimental findings are supported by molecular dynamics simulations, confirming the trapping of the iodine vacancy at the grain boundaries. The trapping is explained by the presence of potential energy minima for defects at the grain boundaries. The slow recovery process in the dark can be explained by the presence of energy barriers that defects have to overcome in order to detrap from GBs, after which they can migrate in the crystalline bulk of the grain to encounter the complementary defect and annihilate. We expect that our findings will also help explain the issues faced with long term stability of perovskite solar cells since mobile ionic defects have been shown to play a vital role in degradation mechanisms. [31]
Experimental Section
Samples Fabrication: The glass substrates were cleaned with Mucasol (2%), acetone, and isopropanol in an ultrasonicator for 15 min each. Then the substrates were dried with an N 2 gun and cleaned in an UV-O 3 cleaner for another 15 min. The cleaned substrates were immediately transferred to a glovebox (N 2 atmosphere) to fabricate the perovskite thin films. The perovskite solutions were made of stoichiometric PbI 2 (Tokyo Chemical Industry, 98% purity) and CH 3 NH 3 I (Dyenamo, 99% purity) in a mixed solvent ratio 6:1 of N,N-dimethylformamide (Sigma Aldrich, anhydrous, 99.8%) and dimethyl sulfoxide (Sigma Aldrich, anhydrous, 99.8%). To make the big-grain films, 2% of PbI 2 was replaced by Pb(SCN) 2 . The solution was shaken at 60 °C for 5 min to dissolve all components. 100 µL of perovskite solution was dropped on cleaned substrates, then the following spin coating program was used: 20 s at 4000 rpm with ramping steps of 2 s to 1000 rpm then 3 s to 4000 rpm. 5 s before the end of the program, 500 µL ethyl acetate (Sigma Aldrich, anhydrous, 99.8%) was dropped on the substrates to form a compact film. Immediately after the spin coating, the wet perovskite films were annealed at 100 °C for 1 h. To make the crystal, stoichiometric PbI 2 (Tokyo Chemical Industry, 98% purity) and CH 3 NH 3 I (Dyenamo, 99% purity) were dissolved in γ-butyrolactone (ReagentPlus, ≥99%). Subsequently, the solution was heated to 150 °C for 3-8 h to form crystals. The inverted crystallization method to grow the crystal was adopted from Saidaminov et al. [34] All the chemicals were used as received.
Solar Cell Fabrication: The ITO (tin-doped indium oxide) substrates were cleaned with the same procedure as above. The cleaned ITO substrates were then coated with SnCl 2 (2 mg mL −1 in ethanol) using a 4000 rpm, 30 s spin coating program. The wet layers were annealed at 180 °C for 1 h. Then the substrates were transferred to a N 2 filled glovebox to deposit the perovskite layer as mentioned above. Notably, to make small grain devices, the perovskite layer was annealed at 60 °C for 10 min then 50 min at 100 °C, whereas big grain devices were annealed at 140 °C in the first step. Following the perovskite layer, Spiro-OMeTAD was used as the hole selective layer, in which 36.15 mg of Spiro-OMeTAD was dissolved in 1 mL of chlorobenzene, doped with 14.40 µL 4-tert-butylpyridine (Sigma Aldrich, 98%), 8.75 µL of bis(trifluoromethane)sulfonimide lithium salt (Li-TFSI) (99.95% trace metals basis, Sigma Aldrich) (300 mg mL −1 of acetonitrile), and 14.50 µL FK209 (Co(III) salt, Sigma Aldrich) (500 mg mL −1 of acetonitrile). Finally, 80 nm of Au (Alfa Aesar, 99.99% purity) was evaporated on top at less than 1 Å s −1 rate to finish the device.
Photoluminescence Spectroscopy: A schematic of the photoluminescence spectroscopy setup is shown in Figure S1 (Supporting Information). The measurement was performed in a home-built inverted microscope based on the Olympus IX-71 body. For excitation, the 458 nm line of a CW Argon laser was employed. The only exception is Figure 1, for which we used a 450 nm diode laser (Thorlabs CPS450). The excitation was either focused or collimated at the back aperture of the objective (Olympus LUCPlanFL 40, NA 0.6) with the use of a collimating lens, where the former yielded a wide-field excitation spot ("defocused mode") and the latter yielded a focused spot ("focused mode"). All the data shown in the study were obtained from measurements carried out under ambient conditions. In the Supporting Information, a similar phenomenon of spatial PL redistribution is demonstrated for a single crystal (see Video S2 in the Supporting Information for measurement in air and Video S4 in the Supporting Information for measurement in N 2 ).
Time-resolved photoluminescence (trPL) measurements were carried out in a home-built setup with an excitation wavelength of 660 nm from a pulsed supercontinuum laser light source (SuperK Extreme) operating at 304 kHz repetition rate. The spot size was 25-35 µm in diameter and the pulse fluence of 10-30 nJ cm −2 was chosen in order to generate an equivalent number of charge carriers as would be expected under 1 sun conditions (1.5 × 10 21 photons m −2 s −1 ). PL was collected panchromatically and the decay was recorded using time-correlated single photon counting with a PicoHarp TCSPC Module by PicoQuant.
Scanning Electron Microscopy and Energy-Dispersive X-Ray Spectroscopy: The SEM/EDX images were acquired with a Hitachi S4100 at 30k magnification. The acceleration voltages used for SEM and EDX were 5 and 12.5 kV, respectively.
Classical Molecular Dynamics: The model of a polycrystal with Σ5/102 twin boundaries was obtained by i) cutting an orthorhombic crystal of MAPbI 3 with (102) surfaces; ii) generating a replica by a mirror symmetry about one of the surfaces; iii) merging the two crystals after a relative shift aimed at an optimal match of atoms at the boundary; and iv) applying periodic boundary conditions. The 4032-atom model obtained by this procedure was first optimized by conjugate gradient force minimization, then heated to 300 or 400 K and annealed for 0.3 ns. The vacancy was generated by removing one iodine atom, and its position was identified by calculating the atomic coordination. The vacancy trajectory and diffusion were studied during 0.5 ns constant number of particles, pressure, and temperature (NPT) dynamics at 400 K and 1 bar.
The energy profile was obtained by i) choosing an atomic configuration equilibrated at 300 K; ii) selecting the iodine atoms along a linear region orthogonal to the boundaries (see Figure 5); iii) placing one vacancy at each of the selected positions; iv) optimizing positions and energy by force minimization; and v) collecting all data as a function of position. The profile was obtained by a local running average. All simulations were performed by using the LAMMPS code. [55]
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 2020-04-23T09:13:36.144Z | 2020-04-19T00:00:00.000 | {
"year": 2020,
"sha1": "b3abdd065cc509b2a90e2b90019a37c46219784c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/aenm.201903735",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "06c4d6c615a4a1d117e7414a166cbd61ad1521ff",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
181832440 | pes2o/s2orc | v3-fos-license | A new Dead Sea pollen record reveals the last glacial paleoenvironment of the southern Levant
The southern Levant is a key region for studying vegetation developments in relation to climate dynamics and hominin migration processes in the past due to the sensitivity of the vegetation to climate variations and the long history of different anthropogenic occupation phases. However, paleoenvironmental conditions in the southern Levant during the Late Pleistocene were still insufficiently understood. Therefore, we investigated the vegetation and fire history of the Dead Sea region during the last glacial period. We present a new palynological study conducted on sediments of Lake Lisan, the last glacial precursor of the Dead Sea. The sediments were recovered from the center of the modern Dead Sea within an ICDP campaign. The palynological results suggest that Irano-Turanian steppe and Saharo-Arabian desert vegetation prevailed in the Dead Sea region during the investigated period (ca. 88,000–14,000 years BP). Nevertheless, Mediterranean woodland elements significantly contributed to the vegetation composition, suggesting moderate amounts of available water for plants. The early last glacial was characterized by dynamic climate conditions with pronounced dry phases and high but unstable fire activity. Anatomically modern humans entered the southern Levant during a climatically stable phase (late MIS 4–MIS 3) with diverse habitats, constant moisture availability, and low fire activity. MIS 2 was the coldest phase of the investigated timeframe, causing changes in woodland composition and a widespread occurrence of steppe. We used a biome modeling approach to assess regional vegetation patterns under changing climate conditions and to evaluate different climate scenarios for the last glacial Levant. The study provides new insights into the environmental responses of the Dead Sea region to climate variations through time. It contributes towards our understanding of the paleoenvironmental conditions in the southern Levant, which functioned as an important corridor for human migration processes.
Introduction
The study of the paleoclimate and its influences on environments is essential not only to understand current and future climate changes (Masson-Delmotte et al., 2013) but also to reconstruct the history of mankind. The Levant in the southeastern Mediterranean region is a possible meeting point of anatomically modern humans (AMH) and Neanderthals, where gene flow between the two hominins may have occurred (Kuhlwilm et al., 2016). The Levantine fossil record provides evidence for the first migration of AMH out of Africa (Hershkovitz et al., 2018) and for later hominin migration processes towards Eurasia (Mellars, 2011). A renewed migration of AMH out of Africa, probably leading to the occupation of Europe, took place during the last glacial. Earliest fossil evidence dated to ca. 55 ka BP (kilo years before present; all radiocarbon dates in this paper have been calibrated) originates from Manot Cave, Israel (Hershkovitz et al., 2015). In addition, the extinction of Neanderthals occurred during the last glacial, namely between 45 and 30 ka BP, leading to ongoing discussions about the causes (Shea, 2008). Among others, climatic causes have been postulated for AMH migration and Neanderthal extinction processes (e.g., Shea, 2008;Müller et al., 2011).
However, there are ongoing discussions on the hydroclimatic conditions in the southern Levant during the last glacial. While the majority of climate records from the Eastern Mediterranean agree on cold and arid conditions during the last glacial (e.g., Fleitmann et al., 2009; Langgut et al., 2011; Müller et al., 2011; Pickarski et al., 2015), most studies from the Dead Sea region suggest more humid conditions compared to today. Indications for an increased glacial wetness were provided by various studies dealing, for instance, with lake-level reconstructions, geochemical compositions of sediments, and speleothem activities. Lake-level reconstructions of Lake Lisan, the precursor of the Dead Sea, indicate major lake-level high stands during the last glacial of up to ca. 240 m above typical Holocene levels (e.g., Bartov et al., 2002; Torfstein et al., 2013b). During the highest stands, Lake Lisan even merged with the Sea of Galilee (Bartov et al., 2003; Hazan et al., 2005; Torfstein et al., 2013b), being separated today by more than 100 km. The sediments that deposited during the occurrence of Lake Lisan are mainly comprised of aragonite-detritus laminae (e.g., Begin et al., 1974; Katz et al., 1977; Machlus et al., 2000). Aragonite formation requires an increased input of freshwater to the lake to provide bicarbonate to the Ca-chloride brine (Stein et al., 1997; Barkan et al., 2001). Therefore, the high lake levels and aragonite deposition might indicate increased precipitation rates during the last glacial (Stein et al., 1997; Torfstein et al., 2013b). Increased glacial wetness is also suggested by the deposition of speleothems in areas that were too dry for speleothem growth during the Holocene (Vaks et al., 2003, 2006; Bar-Matthews et al., 2017).
Despite its outstanding role in terms of archeology and paleoclimate, little was known about the paleovegetational conditions in the southern Levant during the last glacial. The study of fossil pollen and other palynomorphs contained in sediments enables the reconstruction of past environments, particularly the paleovegetation and paleoclimate (Faegri and Iversen, 1989). However, a detailed palynological study based on an independent chronology was still missing for the last glacial Dead Sea region.
Previous palynological studies from the Levant were either based on marine or terrestrial sediments. Marine studies were conducted in the Levantine Basin and suggest cold and dry glacial conditions (Cheddadi and Rossignol-Strick, 1995; Langgut et al., 2011). However, the pollen assemblages reflect a huge pollen source area given the basin size and were influenced by African vegetation due to the Nile outflow (Langgut, 2018). A terrestrial pollen record encompassing the whole last glacial was obtained from Yammouneh Basin, Lebanon (Gasse et al., 2015). This study also indicates warm and wet interglacials and cold and dry glacials. However, given the high altitude of the Yammouneh Basin, the vegetation might have been influenced by water deficiencies due to water storage as ice or frozen soils. The same orographic climate effect might have affected the Birkat Ram area on the Golan Heights, where palynological results for the last 30 ka suggest a similar vegetation pattern (Schiebel, 2013). Strikingly, a last glacial sequence from the Sea of Galilee, located in the Jordan Valley below sea level, also indicates less water availability for plants compared to the Holocene (Miebach et al., 2017). Pollen records from the Hula Basin, northern Israel, might also encompass large parts or even the whole last glacial period (e.g., Horowitz, 1979; Weinstein-Evron, 1983). However, problems with radiocarbon ( 14 C) dating (e.g., Meadows, 2005; van Zeist et al., 2009) and Uranium–Thorium (U–Th) dating (Weinstein-Evron et al., 2001) of sediment cores from Hula Basin make a convincing correlation to other records difficult. Major chronological uncertainties also occur in a sediment core from the Ghab Valley, northwestern Syria. The pollen sequence suggested the spread of Mediterranean forests during marine isotope stages (MIS) 3 and 2 (Niklewski and van Zeist, 1970). However, Rossignol-Strick (1995) revised the chronology to a late glacial and Holocene age. Horowitz (1992) presented a vegetation model for the Dead Sea region during the Late Quaternary based on several low-resolution pollen sequences. The correlation to climate records was based on few age determinations and the following hypothesis: during times corresponding to even numbered MIS, woodland vegetation predominated (e.g., MIS 4 and 2), while periods corresponding to odd numbered MIS were characterized by the dominance of steppe vegetation (e.g., MIS 3 and 1) and/or desert vegetation (e.g., MIS 5 and 1). Therefore, further knowledge is required about the vegetation history of the Levantine lowlands based on a terrestrial high-resolution pollen record with a robust chronology.
The study of microscopic charcoal can provide valuable insights into the history of fire activity and the relationship between the paleovegetation, anthropogenic activities, and fire regimes (Whitlock and Larsen, 2001). However, almost none of the last glacial palynological studies from the Levant investigated microcharcoal in addition to pollen. Exceptions are studies from the Ghab Valley (Yasuda et al., 2000) and the Hula Basin (Turner et al., 2010) dating to the late glacial and Holocene. However, large uncertainties in the radiocarbon chronologies of the sediment cores from the Ghab Valley and the Hula Basin (e.g., Meadows, 2005;van Zeist et al., 2009) make the timing of events and the correlation to other records speculative. Still, these records provide comparisons between changes in fire activity and vegetation because charcoal and pollen originated from the same sediment sequences. The fire history during earlier times of the last glacial and particularly at the Dead Sea remained unknown.
Here, we present a new palynological study conducted on sediments of the last glacial Lake Lisan (Lisan Formation). The investigated sediments were recovered in the framework of the International Continental Scientific Drilling Program (ICDP) from the central Dead Sea (Stein et al., 2011a, b) and have an independent chronology based on 14 C and U-Th dating (Neugebauer et al., 2014;Torfstein et al., 2015;Kitagawa et al., 2017). The palynological study provides new and detailed insights into the last glacial vegetation and fire history of the southern Levant in relation to climate changes. We test previous hypotheses on the paleovegetation in the Dead Sea region, namely the coincidence of woodland expansion and retraction with marine isotope stages (Horowitz, 1992). We evaluate climate scenarios for the last glacial Levant regarding their influence on the biome distribution using a biome-climate transfer function and the accordance with the new palynological dataset. In addition, we draw conclusions about the paleoenvironmental setting for occupation phases of AMH and Neanderthals.
Dead Sea
The Dead Sea is situated in the southern Levant bordering Israel, Jordan, and the Palestinian Territory West Bank (Fig. 1). It is a terminal lake and is primarily fed by the perennial Jordan River but also by groundwater and several streams, which are mainly ephemeral, i.e., they experience occasional flash floods. The total drainage area comprises 42,200 km 2 (Greenbaum et al., 2006). The Dead Sea occupies the lowest continental depression on Earth (currently ca. 430 m below mean sea level (m bmsl)). With a surface area of about 760 km 2 (76 km in N–S direction, up to 17 km in W–E direction), it is the largest lake in the region (Litt et al., 2012). A sill separates a northern deep basin with a water depth of about 300 m from a shallower southern basin. While the water depth of the northern basin has been steadily declining during the last decades due to human impact, the southern basin is nowadays occupied by evaporation ponds (Greenbaum et al., 2006). The Dead Sea is a hypersaline water body with a salinity of ca. 27.5% (ca. 340 g/l), i.e., the salinity is multiple times higher compared to seawater (Gavrieli and Stein, 2006).
The Dead Sea is located at the Dead Sea Transform, a tectonic boundary between the Sinai and Arabian plates. The Dead Sea Basin is the largest and oldest of several pull-apart basins, which originated along the transform during the plate motion process (Garfunkel, 2014). Since its formation in the early Miocene, the Dead Sea Basin subsided continuously and acted as a major sediment trap (Garfunkel, 1997). During the late Neogene, the valley was filled with water coming from the Mediterranean Sea and forming the marine Sedom Lagoon (Stein, 2014). After the disconnection of the Sedom Lagoon from the Mediterranean Sea, the Dead Sea Basin was occupied by a series of lakes. One of them was Lake Lisan, which occurred during the last glacial. While chronostratigraphic analyses at the shore indicate an age of ca. 70 to 15 ka BP for the duration of Lake Lisan (e.g., Torfstein et al., 2013a), the new ICDP core drilled at the deepest part of the Dead Sea suggests that the transition to Lake Lisan occurred ca. 15–20 ka earlier (Neugebauer et al., 2016).
Climate and vegetation
The southern Levant is a transition zone of different climate regimes (Fig. 2). The northern part is characterized by a Mediterranean climate with hot, dry summers and mild, wet winters. The precipitation is mainly brought by Mediterranean mid-latitude cyclones, e.g., Cyprus Lows (Goldreich, 2003). The southern part is occupied by a dry, subtropical desert (Kushnir et al., 2017). Here, precipitation arrives mainly as flash floods by the tropical Active Red Sea Trough (Dayan and Morin, 2006). While precipitation rates generally decrease towards the south and with lower elevations, temperatures generally decrease towards the north and with higher elevations. Seasonality increases towards the east (Goldreich, 2003). The Dead Sea itself lies in a hyperarid area with 50–100 mm mean annual precipitation (Greenbaum et al., 2006).
The natural vegetation of the southern Levant is primarily shaped by the precipitation distribution but also by temperatures and soils (Fig. 2; Zohary, 1962, 1982; Danin and Plitmann, 1987; Danin, 1992). The northern part is characterized by the Mediterranean biome with arboreal climax communities. Mediterranean woodland reaches southward to the Judean Mountains and along the upper slopes of the Rift Valley east of the Dead Sea. The area receives 350–1200 mm mean annual precipitation (Zohary, 1962). Common trees are deciduous Quercus (mostly Q. ithaburensis and Q. boissieri; Q. libani at high elevations), evergreen Quercus (Q. calliprinos), Pistacia spp., Ziziphus spp., Rhamnus spp., Ceratonia siliqua, Phillyrea latifolia, Styrax officinalis, Arbutus andrachne, and several Rosaceae species. They are accompanied by conifers such as Pinus halepensis and Juniperus spp. (Baruch, 1986 and references therein; Danin, 1992). Irano-Turanian steppe occupies areas with ca. 100–350 mm mean annual precipitation. It is characterized by herb and dwarf-shrub communities dominated by Artemisia herba-alba. Saharo-Arabian desert vegetation occurs in the southern part, where the mean annual precipitation falls below 100 mm. It is a vegetation type with sparse plant cover and low diversity. Important representatives of the Saharo-Arabian vegetation are Amaranthaceae. Sudanian vegetation occupies tropical oases of the Jordan Valley. Mainly trees and shrubs such as Maerua crassifolia, Acacia tortilis, Balanites aegyptiaca, and Ziziphus spina-christi compose this vegetation type (Zohary, 1962).
Drilling campaign and stratigraphy
The Dead Sea Deep Drilling Project (DSDDP) under the auspices of the ICDP was intended to gain the first long, continuous, and high-resolution sediment sequence from the Dead Sea. The drilling campaign took place in 2010/2011 (Stein et al., 2011a, b). We analyzed sediment samples from the deepest borehole (core 5017-1-A, site 5017-1, N 31°30′28.98″, E 35°28′15.60″) with a total drilled length of 455.34 m. It is located at the center of the northern basin (Fig. 1). In this study, the sediment depth of the analyzed samples encompasses 199.07–92.35 m. The investigated interval belongs to the Lisan Formation (Neugebauer et al., 2014, 2016). 76% of the Lisan Formation of the 5017-1-A core are mass transport deposits (MTDs), showing disturbed, slumped, and homogeneous sediment sections. The remaining sediments are composed of laminated alternating aragonite and detritus (aad facies) with some gypsum laminae (Kagan et al., 2018). We only sampled the latter facies, resulting in sampling gaps of different length.
Chronology
The current chronology of the investigated sediment sequence is based on a linear interpolation of published 14 C and U–Th dates (Table 1). Twelve 14 C dates were obtained from terrestrial plant remains of core 5017-1-A (Neugebauer et al., 2014; Kitagawa et al., 2017). We calibrated the 14 C dates using the calibration dataset IntCal13 (Reimer et al., 2013) within the software OxCal 4.2 (Ramsey, 2009). U–Th dating was performed on four samples of primary aragonite from core 5017-1-A (for more details see Torfstein et al., 2015). In addition, Torfstein et al. (2015) correlated a massive gypsum deposit in the 5017-1-A core to U–Th dated counterparts in exposed margins of the Dead Sea Basin (Torfstein et al., 2013a). To account for the high number of MTDs in the Lisan Formation, we constructed an event-free age-depth model following Kagan et al. (2018). MTDs larger than 50 cm (Fig. 5) were excluded from the model. Debrites (homogenites, turbidites, and breccias) were completely removed, whereas slumps were considered to include some original laminae. Thus, only 2/3 of the slump thickness was excluded. According to the age-depth model, the investigated sediment section of this study encompasses ca. 88–14 ka BP.
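For illustration, the event-free interpolation can be sketched as follows; the MTD intervals and dated tie points below are hypothetical placeholders, not the published values from Table 1.

```python
import numpy as np

def event_free_depth(depth_m, mtds):
    """Subtract mass transport deposit (MTD) thickness above a given composite depth.
    `mtds` holds (top, bottom, removed_fraction); debrites use 1.0, slumps 2/3."""
    d = np.asarray(depth_m, dtype=float)
    removed = np.zeros_like(d)
    for top, bottom, frac in mtds:
        removed += (np.clip(d, top, bottom) - top) * frac
    return d - removed

# Hypothetical tie points on the event-free depth scale (m) and their ages (ka BP)
tie_depth = np.array([95.0, 120.0, 150.0, 180.0, 199.0])
tie_age = np.array([14.0, 30.0, 50.0, 75.0, 88.0])

def age_at(depth_m, mtds):
    """Linear interpolation of age between dated tie points on the event-free scale."""
    return np.interp(event_free_depth(depth_m, mtds), tie_depth, tie_age)

mtds = [(100.0, 100.8, 1.0), (130.0, 131.5, 2 / 3)]   # illustrative MTD intervals
print(age_at([110.0, 160.0], mtds))
```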
Palynological analyses
Pollen preparation of 203 sediment samples with a sample volume of mostly 4–6 cm 3 was processed following a standard protocol described by Faegri and Iversen (1989). The chemical treatment included 10% hydrochloric acid (HCl) for 10 min, 40% hydrofluoric acid (HF) for 48 h, 10% hot HCl for 10 min, glacial acetic acid (C 2 H 4 O 2 ), hot acetolysis with 1 part concentrated sulfuric acid (H 2 SO 4 ) and 9 parts concentrated acetic anhydride (C 4 H 6 O 3 ) for a maximum of 3 min, and C 2 H 4 O 2 . Sieving and ultrasonic sieving were carried out to remove particles coarser than 200 µm and particles finer than 10 µm, respectively. Lycopodium tablets with a known number of spores were added to each sample to calculate pollen, non-pollen palynomorph, and micro-charcoal concentrations (Stockmarr, 1971). Samples were preserved in glycerol and were stained with safranin.
Palynomorphs were identified with a Zeiss Axio Lab.A1 light microscope with the help of palynomorph atlases and keys (Reille, 1995, 1998, 1999; Beug, 2004) as well as the pollen reference collection of the Institute of Geosciences and Meteorology, University of Bonn. At least 500 terrestrial pollen grains were counted in each sample. Obligate aquatic plants were not included in the terrestrial pollen sum to exclude local taxa growing in the lake (Moore et al., 1991). Furthermore, destroyed and unknown pollen were excluded from the terrestrial pollen sum, which was used to calculate percentages of the pollen assemblage. Quercus pollen was grouped into two morphotypes. Evergreen Quercus pollen was separated from deciduous Quercus pollen according to the morphological features of the evergreen Q. ilex type (Beug, 2004). The pollen types were named after evergreen Q. calliprinos and deciduous Q. ithaburensis, respectively, today's most common species in the region, following the nomenclature rules by Birks (1973). Some pollen types were grouped to higher taxonomic levels in the summary pollen diagrams. Likewise, Alnus, Fraxinus excelsior type, Platanus orientalis, Salix, Tamarix, Ulmus/Zelkova, and Vitis were grouped to riverine trees and shrubs following van Zeist et al. (2009). Pollen count ratios of Artemisia to Amaranthaceae (A/A) and Quercus ithaburensis type to Amaranthaceae (Q/A) were calculated to evaluate their use as climate indicators (El-Moslimany, 1990; Zhao et al., 2012). Charcoal particles were divided into two size fractions with diameters of 25–100 µm and 100–200 µm. If the size fraction is not stated hereafter, the sum of both fractions is given.
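A minimal sketch of these bookkeeping steps (concentrations from the Lycopodium spike after Stockmarr, 1971, percentages of the terrestrial sum, and the A/A and Q/A ratios) is given below; all counts and the spike size are invented for illustration.

```python
def concentration(count, spike_counted, spike_added, volume_cm3):
    """Palynomorph concentration (grains per cm^3) from a Lycopodium spike."""
    return count * spike_added / (spike_counted * volume_cm3)

def percentages(counts, exclude=("aquatics", "destroyed", "unknown")):
    """Percentages relative to the terrestrial pollen sum (excluded groups removed)."""
    terrestrial = {k: v for k, v in counts.items() if k not in exclude}
    total = sum(terrestrial.values())
    return {k: 100.0 * v / total for k, v in terrestrial.items()}, total

# Illustrative counts for one sample (at least 500 terrestrial grains)
counts = {"Amaranthaceae": 180, "Artemisia": 120, "Poaceae": 60,
          "Quercus ithaburensis type": 90, "Quercus calliprinos type": 20,
          "Juniperus type": 15, "Pistacia": 15, "aquatics": 12, "unknown": 5}
pct, terrestrial_sum = percentages(counts)
aa_ratio = counts["Artemisia"] / counts["Amaranthaceae"]                  # A/A
qa_ratio = counts["Quercus ithaburensis type"] / counts["Amaranthaceae"]  # Q/A
conc = concentration(terrestrial_sum, spike_counted=250,
                     spike_added=18583, volume_cm3=5.0)  # spike size is a placeholder
```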
A stratigraphically constrained cluster analysis using a square root transformation was performed by CONISS (Grimm, 1987) to aid pollen zonation. All taxa with more than 1% of the terrestrial pollen sum and the sum of trees and shrubs were used for the analysis. Pollen diagrams were prepared with the software Tilia, Version 2.0.41 (© 1991–2015 Eric C. Grimm).
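In the same spirit, a stratigraphically constrained clustering in the manner of CONISS can be sketched as below: only neighboring clusters are allowed to merge, and the merge chosen at each step is the one with the smallest increase in within-cluster dispersion of the square-root-transformed percentages. This is a simplified stand-in, not the Tilia implementation.

```python
import numpy as np

def constrained_cluster(percent_matrix):
    """Adjacency-constrained agglomerative clustering (CONISS-like) on sqrt-transformed data.
    Returns (index of last sample in the left cluster, dispersion increase) per merge."""
    data = np.sqrt(np.asarray(percent_matrix, dtype=float))
    clusters = [[i] for i in range(len(data))]
    merges = []

    def dispersion(members):
        x = data[members]
        return float(((x - x.mean(axis=0)) ** 2).sum())

    while len(clusters) > 1:
        increases = [dispersion(clusters[j] + clusters[j + 1])
                     - dispersion(clusters[j]) - dispersion(clusters[j + 1])
                     for j in range(len(clusters) - 1)]
        j = int(np.argmin(increases))
        merges.append((clusters[j][-1], increases[j]))
        clusters[j:j + 2] = [clusters[j] + clusters[j + 1]]
    return merges  # late, costly merges hint at zone boundaries

# Six samples x three taxa (percent of the terrestrial sum), purely illustrative
spectra = [[60, 30, 10], [58, 32, 10], [55, 30, 15],
           [30, 20, 50], [28, 22, 50], [25, 25, 50]]
print(constrained_cluster(spectra))
```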
Biome modeling
The (Bayesian) biome model (Schölzel, 2006; Litt et al., 2012; Ohlwein and Wahl, 2012) is generally used for probabilistic reconstructions of past climate states given a biome distribution. It includes the probability for the occurrence of each biome given the corresponding pollen spectra, the biome-climate transfer function (also called likelihood function), and a prior probability distribution. In this study, spatial variations of the biomes in the Levant (the Mediterranean, Irano-Turanian, and Saharo-Arabian biome) were modeled using climate information. Therefore, we estimated only the biome-climate transfer function in the biome model, i.e., the probability for the occurrence of a biome given a certain climate state. This probability was determined by using a quadratic discriminant analysis (QDA), where the probability for each biome is dependent on the other two biomes and where the probabilities are normalized to one. We used the CRU version 4.01 dataset comprising monthly terrestrial climate parameters for the period from 1901 to 2016. This global dataset was created by interpolating quality checked station data on a regular 0.5° × 0.5° latitude-longitude grid (Harris et al., 2014). We used three climate parameters, namely annual precipitation, winter temperature (average December, January, February values), and summer temperature (average June, July, August values). The recent biome distribution is based on vegetation maps by Meusel et al. (1965) that were digitized on the same grid (Fig. 3a and Schölzel et al., 2002; Litt et al., 2012; Thoma, 2017).
Table 1 Radiocarbon ( 14 C) dates of terrestrial plant remains and Uranium–Thorium (U–Th) dates of Dead Sea core 5017-1-A with sediment depths. The median of calibrated ages is given for 14 C dates. AMS: accelerator mass spectroscopy.
Fig. 3. … (Meusel et al., 1965; Thoma, 2017) used for biome modeling, b) original biome distribution in the Levant, c) modeled probability for the occurrence of biomes in the surrounding of Lake Lisan under changed climate parameters (the analyzed area is marked by the box in b). P A = annual precipitation sum, T W/S = mean summer and winter temperature.
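A minimal sketch of the transfer-function step is given below using scikit-learn's QuadraticDiscriminantAnalysis; predict_proba already returns probabilities normalized to one across the three biomes. The training rows are invented stand-ins for the gridded CRU climate and Meusel biome data, so the numbers carry no meaning beyond illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# One row per grid cell: annual precipitation (mm), winter T (deg C), summer T (deg C)
X = np.array([[650, 10, 26], [820, 8, 25], [540, 11, 27], [700, 9, 24],
              [280, 8, 28], [220, 6, 27], [330, 9, 29], [180, 7, 30],
              [90, 13, 33], [60, 15, 34], [120, 12, 32], [40, 16, 35]], dtype=float)
y = ["Mediterranean"] * 4 + ["Irano-Turanian"] * 4 + ["Saharo-Arabian"] * 4

qda = QuadraticDiscriminantAnalysis()
qda.fit(X, y)

# Probability of each biome given a hypothetical climate state (e.g., cooler and wetter)
scenario = np.array([[400.0, 5.0, 23.0]])
print(dict(zip(qda.classes_, qda.predict_proba(scenario)[0])))
```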
Biome modeling was performed using the freely available R software (R Core Team, 2016). Further details of the model, including the mathematical description, were given by Ohlwein and Wahl (2012) and Litt et al. (2012).
Biome modeling
We performed biome modeling to explore the spatial implications of Levantine biomes in response to climate fluctuations. A series of plotted scenarios (Fig. 3c) provides clues about the climatic sensitivity of biomes in the surrounding of Lake Lisan (Fig. 3b). The model reveals major biome shifts in response to precipitation and temperature changes: I) The Mediterranean biome spreads with increasing precipitation, but only a temperature reduction of more than 4 °C causes a decline of the Mediterranean biome; II) Decreasing temperatures trigger a spread of the Irano-Turanian biome, while precipitation changes do not necessarily alter the Irano-Turanian biome distribution; III) The Saharo-Arabian biome shrinks under colder conditions and with increasing precipitation.
The model enables us to explore the spatial distribution of biomes under various climate scenarios for the last glacial period, though it does not account for changing boundary conditions such as atmospheric CO 2 and insolation variations. Given the existing pollen data of the region, those scenarios can be evaluated. Several studies have estimated the glacial temperature in the southern Levant based on proxy data or climate models. For MIS 3, those temperature estimations range between a reduction of ca. 2 and 5 °C, with a mean of ca. 3 °C (e.g., ~2 °C, Stockhecke et al., 2016). Despite ongoing discussions on glacial precipitation in the southern Levant (e.g., Torfstein et al., 2013b), few studies are available that quantify the amount of precipitation for the last glacial. Enzel et al. (2008) and Vaks et al. (2006) suggest a doubling of annual precipitation based on rates of aragonite deposition in Lake Lisan and speleothem growth in the northern Negev. Rohling (2013) and Barkan et al. (2001) even assume a 5-fold and 6-fold precipitation rate compared to today, respectively. In contrast, Stockhecke et al. (2016) suggest a reduction of annual precipitation for the last glacial Levant, i.e., ca. 15% during MIS 3 and ca. 30% during MIS 2 (Stockhecke et al., 2016: Fig. 12).
In addition, various aspects of seasonality changes were discussed for the last glacial climate. Such are, for example, an increased precipitation seasonality (Prentice et al., 1992), a decreased precipitation seasonality (Orland et al., 2012), an increased frequency of floods (Ben Dor et al., 2018), and a seasonal moisture deficit due to increased snowfall (Robinson et al., 2006). While we assess total precipitation changes in the biome model, the current biome model does not account for seasonality changes. Still, we discuss the influence of glacial seasonality changes on plants in section 4.3.4.
The modeled biome distribution given the different climate scenarios (Fig. 4b–e) illustrates the variability of recent hypotheses in the literature. All vegetation patterns differ considerably from today's modeled biome distribution with unchanged climate parameters (Fig. 4a). The reduction of temperature particularly favors the Irano-Turanian biome, which replaces the Saharo-Arabian biome over a wide area. In some areas such as the Yammouneh Basin, the temperature decline triggers a limitation of the Mediterranean biome due to orographic effects such as frozen soils and snowfall (cf. Develle et al., 2011). Precipitation changes have major impacts on the Mediterranean biome. While an increased precipitation facilitates the spread of the Mediterranean biome, particularly southeast of the Dead Sea, precipitation declines reduce the probability of the Mediterranean biome occurrence.
The biome model outputs are discussed regarding their coincidence with regional pollen data in sections 4.3.3 and 4.3.4.
Pollen zonation
The results of the palynological investigation are summarized in Table 2 and Figs. 5 and 6. Five pollen assemblage zones (PAZs) were defined according to the cluster analysis (Fig. 5). The Roman numeral II follows the hierarchical classification of pollen assemblage superzones after Tzedakis (1994) for a better overview of future synthesis pollen records from the whole Dead Sea profile 5017-1 (see also Chen and Litt, 2018).
General remarks and interpretations
The vegetation in the Dead Sea region was generally dominated by open vegetation during the last glacial period (ca. 88–14 ka BP; Figs. 5 and 6). Large pollen amounts of Amaranthaceae (amaranth family including the former goosefoot family Chenopodiaceae), Artemisia, and Poaceae (grasses) indicate the widespread occurrence of herb and dwarf-shrub communities. Amaranthaceae are the main representatives of the Saharo-Arabian desert biome, which nowadays receives very low annual precipitation values of 100 mm or less (Zohary, 1962; Litt et al., 2012). The plant family contains many drought-adapted and salinity-tolerant species (Rossignol-Strick, 1995; van Zeist et al., 2009). Artemisia is the dominant plant of today's Irano-Turanian steppe biome (Zohary, 1962) and tolerates aridity though less extreme than Amaranthaceae (Rossignol-Strick, 1995). Artemisia is adapted to somewhat higher precipitation rates of ca. 100–350 mm (Zohary, 1962). Modern studies showed that Artemisia pollen increases and Amaranthaceae pollen decreases with decreasing aridity. Therefore, A/A ratios can be used as a moisture indicator, particularly in primary non-forested areas (El-Moslimany, 1990; Zhao et al., 2012). The expansion of Irano-Turanian steppe and suppression of Saharo-Arabian desert under reduced temperatures, as shown by the biome model (Figs. 3 and 4), implies that increased A/A ratios can also be indicative of reduced temperatures. Poaceae are also associated with the Irano-Turanian steppe biome (Litt et al., 2012), particularly humid steppes (van Zeist and Bottema, 2009). Yet, their various species also admix into a range of vegetation types (Danin, 1992). Poaceae comprise a wild pollen type and a Cerealia pollen type, which can be morphologically distinguished. While the wild type refers to the majority of wild grasses, the Cerealia type contains mainly domesticated cereals and their ancestors, which are native to this region (Beug, 2004).
Table 2 Pollen assemblage zones (PAZs) with depths, ages, mean spatial and temporal resolution of samples, main components of pollen assemblages (arboreal pollen (AP) and nonarboreal pollen (NAP) with mean percentages), pollen concentration (PC), and definition of lower boundaries (LB).
Fig. 6. Pollen diagram of Dead Sea core 5017-1-A plotted against age with selected taxa, pollen concentrations, ratios of Artemisia to Amaranthaceae and Quercus ithaburensis type to Amaranthaceae with dots marking dry events, charcoal concentrations, and marine isotope stages (MIS) after Lisiecki and Raymo (2005). The chronology is based on a linear interpolation of calibrated radiocarbon dates (triangles), Uranium–Thorium dates (squares), and a correlated Uranium–Thorium date (star).
Trees and shrubs never dominated the Dead Sea region during the investigated period. Nevertheless, they contributed substantially to the pollen composition (on average 25.5%). Quercus ithaburensis type (deciduous oak) was the most abundant arboreal pollen type, followed by Juniperus type (juniper and/or Mediterranean cypress), Quercus calliprinos type (evergreen oak), and Pistacia (pistachio). While deciduous and evergreen oaks are usually well represented to overrepresented (van Zeist et al., 1975; Rossignol-Strick, 1995), Pistacia and Juniperus type are usually underrepresented in the pollen precipitation (Rossignol-Strick, 1995; van Zeist et al., 2009). All of these trees and shrubs represent the Mediterranean biome (Baruch, 1986; Litt et al., 2012), which occurs nowadays in the most humid areas of the southern Levant (Zohary, 1962). Modern Levantine vegetation studies indicated that a reduction of arboreal vegetation coincides with a decline in moisture (Zohary, 1962; Kadmon and Danin, 1999). Likewise, the Mediterranean biome retreats under reduced precipitation rates in the biome model (Figs. 3 and 4). This relation has also been described and discussed by Litt et al. (2012) based on botanical-climatological transfer functions by using a Holocene Dead Sea pollen record. Therefore, the amount of arboreal pollen is strongly related to changes in available moisture for plants in the past, i.e., effective moisture, which depends mainly on precipitation and evapotranspiration (the sum of evaporation and plant transpiration). The Q/A ratio, directly comparing the most abundant components of the Mediterranean and Saharo-Arabian biomes, also mainly reflects moisture changes. In addition, temperature reductions can limit the Mediterranean biome distribution, particularly at high altitudes (Figs. 3 and 4). Still, other factors such as seasonality, local habitats, and insolation must be considered for evaluating vegetation changes through time.
Pollen concentrations generally indicate vegetation density and pollen productivity in the lake catchment. However, Dead Sea samples that contain gypsum show low pollen concentrations, as indicated in Fig. 5, because gypsum laminae have higher sedimentation rates than alternating aragonite-detritus laminae (Stein, 2001;Torfstein et al., 2015). Still, the overall decreasing trend in pollen concentrations along the record suggests a stepwise vegetation density reduction, particularly at the MIS 5/4 transition and at ca. 35 ka BP.
The analysis of charred particles in sediments is one of the primary methods to reconstruct fire activity, i.e., fire frequency and/or fire intensity, in the past. The comparison with pollen assemblages allowed insights into the relationship between fire, vegetation, and climate (e.g., Swain, 1973; Daniau et al., 2010; Vannière et al., 2011). Charcoal particles above 100 µm in size are not transported far from fire sources. Thus, they indicate local fires. Smaller charcoal particles are able to travel longer distances and therefore reflect regional fires (Whitlock and Larsen, 2001). Charcoal concentrations in the investigated Dead Sea sediments vary from 0 to 7457 particles/cm³ (1476 particles/cm³ on average). Still, the highest charcoal concentrations are several times lower than during the last interglacial (Chen and Litt, 2018). Charcoal particles <100 µm constitute 84.6% of the charcoal sum and indicate the prevalence of regional fires.
Different fire regimes are generally generated by atmospheric conditions, ignition agents, and the availability of consumable resources, i.e., vegetation (Moritz et al., 2010). According to Whitlock et al. (2010), the highest fire activity is usually related to grassland and savanna biomes. Daniau et al. (2010) reviewed changes in fire regimes during the last glacial on a global scale and concluded that they were primarily related to changes in plant productivity. Also, previous studies from the Mediterranean and Near East that investigated the fire history during the last glacial connected enhanced fire activity to higher arboreal pollen percentages and an increased terrestrial biomass caused by higher temperatures and increased moisture (e.g., Daniau et al., 2007; Turner et al., 2010; Pickarski et al., 2015). In contrast, the results from the Dead Sea suggest that forest density, grassland occurrence, and thus availability of fuel were not the primary trigger for changes in fire activity, because there is no significant linear correlation between charcoal concentrations and either arboreal pollen percentages or Poaceae percentages. Hence, other climate variations are possible factors that primarily influenced the Dead Sea charcoal record. Such climate variations could be, for instance, longer or more intensive summer droughts (Vannière et al., 2008).

Moderate pollen frequencies of Quercus ithaburensis type and small pollen amounts of Pistacia, Juniperus type, and Quercus calliprinos type indicate the occurrence of Mediterranean woodland elements. Trees and shrubs probably did not occur in a closed forest belt but were patchily distributed in habitats with locally more available moisture. They most likely formed a mosaic with Irano-Turanian steppe components.
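The absence of a linear relationship between charcoal concentrations and arboreal or Poaceae pollen percentages, noted in the fire-regime discussion above, can be tested with an ordinary Pearson correlation. The sketch below uses invented placeholder arrays, not the measured Dead Sea values.

```python
from scipy.stats import pearsonr

# Hypothetical paired per-sample values (placeholders only)
charcoal = [1200, 300, 4500, 900, 2100, 50, 3300, 700]          # particles/cm^3
ap_percent = [22.0, 18.5, 25.0, 30.2, 21.1, 24.3, 19.8, 27.5]   # arboreal pollen %

r, p = pearsonr(charcoal, ap_percent)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# A non-significant r, as reported for the Dead Sea record, argues against
# fuel availability as the primary control on fire activity.
```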
The charcoal record indicates a rapidly fluctuating fire frequency with an increasing trend (Figs. 6 and 7). Highest charcoal concentrations of the investigated period appear in this PAZ. The charcoal record supports unstable environmental conditions with a high but fluctuating fire activity.
Fluctuations in the vegetation indicate millennial-scale climate oscillations with four phases of reduced available moisture expressed by low A/A and low Q/A ratios (Figs. 6 and 7). They strikingly coincide with the deposition of gypsum (Figs. 5 and 7;Neugebauer et al., 2014). Neugebauer et al. (2016) correlated dry phases during the early last glacial derived from micro-facies analyses from the Dead Sea core 5017-1-A to cold phases in the North Atlantic, coinciding with stadial conditions (mostly cold and dry) in other Mediterranean records. Following this interpretation, the detected dry phases might correlate to Greenland stadials ( Fig. 7; Rasmussen et al., 2014), i.e., pollen would show a Dansgaard-Oeschger signature. However, the current chronology does not allow a convincing correlation to single climate events of other paleorecords. Thus, the connection between rapid vegetation changes in the Dead Sea region and high latitude climatic conditions during this phase remains speculative.
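Because any correlation to Greenland stadials hinges on the age model, it may help to recall that the chronology rests on linear interpolation between dated horizons (see the Fig. 6 caption). A minimal sketch of such an age-depth model is shown below; the tie points are invented for illustration and are not the published dates.

```python
import numpy as np

# Hypothetical dated horizons: composite depth (m) vs. calibrated age (ka BP)
tie_depth_m = np.array([90.0, 150.0, 250.0, 350.0, 455.0])
tie_age_ka  = np.array([14.0,  35.0,  60.0,  75.0,  88.0])

def depth_to_age(depth_m):
    """Linearly interpolate age (ka BP) from composite depth (m)."""
    return np.interp(depth_m, tie_depth_m, tie_age_ka)

print(depth_to_age(200.0))  # age assigned to a sample at 200 m depth
```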
The environment during late MIS 4 to MIS 3 and implications for modern humans
PAZ II3 (168.05–130.99 m; 62.6–34.7 ka BP) corresponds to late MIS 4 and early/middle MIS 3. The onset of PAZ II3 marks the strongest shift in the vegetation during the investigated time, as indicated by the cluster analysis (Fig. 5). Artemisia pollen percentages increase while the pollen frequencies of a range of other herbs and dwarf shrubs, namely Amaranthaceae, Poaceae, Tubuliflorae, Plantago, Liguliflorae, Brassicaceae, and Rumex, decline. The change in non-arboreal pollen is also indicated by increased A/A values (Fig. 6). Moreover, the onset of PAZ II3 displays an increase of arboreal pollen. The overall highest mean percentages of Quercus ithaburensis type occur in PAZ II3. The pollen composition mirrors a spread of Artemisia steppe and Mediterranean woodland components dominated by deciduous oak, implying an increase of available moisture for plants compared to earlier phases. PAZ II2b (130.99–114.56 m; 34.7–30.6 ka BP) corresponds to late MIS 3. Its vegetation composition and abundance of single taxa largely resemble PAZ II3. However, Artemisia percentages further increase in PAZ II2b. The abundance of Artemisia is not only associated with an increase of effective moisture but can also be indicative of reduced temperatures. The complete absence of thermophilous Olea europaea (olive tree) in PAZ II2b, although never represented by high numbers in the pollen record, supports the latter explanation. A reduction of temperatures since ca. 35 ka BP coincides with the study by Ayalon et al. (2013) suggesting conditions too cold for speleothem growth at Mt. Hermon, northern Israel, between ca. 35 and 16 ka BP (Fig. 7). During late MIS 4 and MIS 3, both factors, relatively high moisture availability and cool conditions, probably played an important role for the environment.

[Fig. 7 caption, partly recovered: ... (Gasse et al., 2011, 2015), B) AP and Artemisia/Amaranthaceae ratios with dashed trend lines and dots marking dry events from the Dead Sea (this study), C) charcoal concentrations in particles/cm³ with dashed trend lines from the Dead Sea (this study), D) gypsum deposits in Dead Sea core 5017-1-A (Neugebauer et al., 2014), E) Dead Sea lake-level reconstructions (Waldmann et al., 2009; Torfstein et al., 2013b) with today's lake level (dashed line), F) speleothem growth periods from Mizpe Shelagim Cave, Mt. Hermon (above; Ayalon et al., 2013) and several caves west of the Dead Sea (below; Vaks et al., 2003; Lisker et al., 2010), G) south Levantine hominin record (Shea, 2008; Hershkovitz et al., 2015), H) precipitation anomaly for the Levant expressed as percentage deviation from the total mean (today has a value of ca. 18%) with average in dark grey (Stockhecke et al., 2016), I) Greenland ice core isotope record with numbered Dansgaard-Oeschger events above and numbered Heinrich events below, and J) June and December insolation for the 30th parallel north (Berger and Loutre, 1991). Marine isotope stages (MIS) refer to Lisiecki and Raymo (2005).]
The charcoal record supports an environmental shift around 63e62 ka BP. The fire activity became low and stayed low until the late glacial. A possible trigger could be weaker/shorter summer droughts due to cooler and/or moister summer conditions.
Our observation of high effective moisture during late MIS 4 and MIS 3 is supported by the deposition of speleothems in areas that were too dry for speleothem growth during the Holocene, namely the rain-shadow semi-desert and the northern Negev (Fig. 7; Vaks et al., 2003, 2006; Lisker et al., 2010; Bar-Matthews et al., 2017), and the continuous deposition of aad laminae in the Dead Sea core 5017-1-A (lithology in Fig. 5; Kagan et al., 2018) indicating a positive freshwater balance (e.g., Stein et al., 1997). Therefore, we cannot support the hypothesis that MIS 3 was a dry phase, as suggested by several authors who correlated warmer phases (e.g., Holocene, MIS 3) at the North Atlantic with drier phases in the southern Levant (e.g., Bartov et al., 2003; Haase-Schramm et al., 2004; Stein et al., 2010). Likewise, our palynological results contradict a previous vegetation model (Horowitz, 1992) that suggested the reduction of woodland components in the Dead Sea region during MIS 3. This discrepancy might be ascribed to the nature of MIS 3. Although defined as a separate isotope stage in the marine realm, the benthic δ18O values defining MIS 3 are much higher than during full interglacial stages such as MIS 5e and 1, and therefore they are rather similar to the full glacial MIS 4 and 2 (Lisiecki and Raymo, 2005). In addition, MIS 3 is not expressed as an interglacial as defined by Jessen and Milthers (1928) for terrestrial records.
To test which climate might have caused the described vegetation pattern, with large proportions of steppe though still allowing enough moisture for Mediterranean woodland, we modeled the biome distribution in the Levant for MIS 3 given two different climate scenarios from the literature: a temperature reduction of 3 °C (see section 4.1) with I) a precipitation decrease of 15% following Stockhecke et al. (2016) (Fig. 4b) and II) a precipitation increase of 100% following Enzel et al. (2008) and Vaks et al. (2006) (Fig. 4c). Both model outputs fit very well with vegetation reconstructions for the Yammouneh Basin (Fig. 7; Gasse et al., 2011, 2015), indicating a sparse herbaceous flora in nowadays forested areas of Lebanon. However, the biome model reveals quite different vegetation patterns for the Dead Sea region. Scenario I suggests a well-mixed balance of the three biome types with an increased probability for the Irano-Turanian biome compared to today. In contrast, scenario II suggests a strong expansion of the Mediterranean biome, causing lower probabilities of the other two biomes. The pollen signal from the Dead Sea (Fig. 6) supports the first scenario due to high proportions of Irano-Turanian taxa and moderate amounts of Mediterranean taxa. The model indicates that a small reduction in precipitation together with reduced temperatures still allows sufficient moisture availability for the growth of Mediterranean taxa in the Dead Sea region.
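A minimal sketch of how such scenario inputs can be constructed is given below, assuming gridded modern annual temperature and precipitation fields; the toy grid values and variable names are placeholders and not the actual inputs of the biome model used here.

```python
import numpy as np

# Toy 2x2 grids standing in for modern climatology (placeholders only)
modern_temp = np.array([[18.0, 20.0], [22.0, 24.0]])     # deg C
modern_prec = np.array([[600.0, 400.0], [200.0, 80.0]])  # mm/yr

def make_scenario(temp, prec, d_temp, prec_factor):
    """Shift temperature by d_temp and scale precipitation by prec_factor."""
    return temp + d_temp, prec * prec_factor

# Scenario I: 3 deg C cooler, 15% less precipitation (after Stockhecke et al., 2016)
temp_1, prec_1 = make_scenario(modern_temp, modern_prec, -3.0, 0.85)
# Scenario II: 3 deg C cooler, 100% more precipitation (after Enzel et al., 2008)
temp_2, prec_2 = make_scenario(modern_temp, modern_prec, -3.0, 2.00)

print(prec_1)
print(prec_2)
```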
The comparison of the Dead Sea region to northern latitudes reveals a contrasting pattern during late MIS 4 and MIS 3. Greenland and Europe were influenced by frequent and rapid climate fluctuations (Fig. 7; Rasmussen et al., 2014), causing rapid expansion and contraction of arboreal vegetation over southern Europe (e.g., Fletcher et al., 2010). In comparison to that, the Dead Sea region saw more stable environmental conditions with very low fire activity, with steady woodland occurrence (lowest AP standard deviation of the record), and without pronounced dry phases resulting in constant moisture availability. Still, climate tracers such as the pollen ratios A/A and Q/A indicate some variations in the pollen assemblage that might refer to Dansgaard-Oeschger oscillations, as discussed above.
AMH occupied the southern Levant during this stable climate phase after an absence of several thousand years. Earlier dispersals from Africa into the southern Levant were dated to 194–177 ka BP based on fossils from the Misliya Cave, northern Israel (Hershkovitz et al., 2018), and to ca. 130–90 ka BP based on human remains from the Skhul Cave and the Qafzeh Cave, northern Israel (e.g., Valladas et al., 1988; Mercier et al., 1993; Grün et al., 2005). The fossils from the Misliya Cave provide the earliest evidence for AMH outside Africa (Hershkovitz et al., 2018). The oldest fossils from the investigated last glacial period date to 54.7 ± 5.5 ka BP and were excavated in the Manot Cave in northern Israel (Hershkovitz et al., 2015). Hershkovitz et al. (2015) suggested that those people could be closely related to the first AMH who eventually dispersed into Europe. Earliest evidence for the colonization of Europe by AMH dates to 45–43 ka BP (Benazzi et al., 2011).
AMH might have benefited from a humid phase between ca. 56 and 44 ka BP when crossing the nowadays arid regions between northeastern Africa and the southern Levant . After entering the southern Levant, however, AMH lived in a refugial area with favorable environmental conditions on longer time scales according to our study. The vegetation formed a mosaic of Irano-Turanian steppe with herbs and dwarf shrubs, open Saharo-Arabian desert vegetation, and Mediterranean woodland elements. This diverse landscape offered a large variety of habitats for animals and humans. No pronounced dry phases occurred, and water was constantly available in the ecosystem. Temperatures were certainly lower than today, but the continuous occurrence of frost-sensitive Pistacia indicates mild winters, at least at lower altitudes (Rossignol-Strick, 1995). The fire activity was constantly very low, suggesting weak/short summer droughts.
In contrast, Neanderthals had already occupied the southern Levant since the late MIS 5. Neanderthal remains from the Amud, Kebara, and Geula Caves were dated to ca. 81e42 ka BP (Shea, 2008 and references therein). Thus, they already lived in the southern Levant during a climatically dynamic phase with pronounced dry phases, higher fire activity, and intensified resource uncertainty.
To conclude, Neanderthals tolerated a wide spectrum of environmental conditions in the glacial Dead Sea region, and the diverse and rather stable environmental conditions since ca. 63 ka BP provided great potential for the residence of AMH.

Together with the high abundance of Artemisia and the complete absence of Olea europaea, these changes in the pollen assemblage indicate the coolest phase of the investigated timeframe. This conclusion fits well with temperature calculations inferred from the Soreq Cave speleothems, central Israel (McGarry et al., 2004; Affek et al., 2008) and alkenone-based sea surface temperatures for the southeastern Levantine Basin (Almogi-Labin et al., 2009). Low pollen concentrations suggest a sparser vegetation compared to previous phases.
The constantly low fire frequency, as indicated by low charcoal concentrations, coincides with observations by Orland et al. (2012). They suggested a decreased seasonal rainfall gradient prior to 15 ka BP inferred from the Soreq Cave speleothems. After 15 ka BP and particularly during the Holocene, the climatic conditions were characterized by higher seasonality with distinct wet and dry seasons.
The comparison of pollen records from the Yammouneh Basin in Lebanon (Gasse et al., 2011, 2015), the Birkat Ram maar lake on the Golan Heights (Schiebel, 2013), the Sea of Galilee in northern Israel (Miebach et al., 2017; Schiebel and Litt, 2018), and the Dead Sea (Litt et al., 2012; this study) allows an assessment of vegetation and climate patterns in the Levant during MIS 2. The Yammouneh Basin and the Birkat Ram are located in regions that are nowadays characterized by Mediterranean climate with a climax of dense Mediterranean woodland. The Sea of Galilee lies at the southern edge of the Mediterranean biome bordering the Irano-Turanian steppe. The Dead Sea is located in a hyperarid area, where Saharo-Arabian desert vegetation prevails. The surrounding mountains are covered by Irano-Turanian steppe vegetation and Mediterranean woodland. Strong gradients in precipitation, temperature, and the vegetation distribution between these localities occur nowadays (Fig. 2; Zohary, 1962) and occurred during the Holocene (Fig. 8).
In contrast, Mediterranean forests were considerably reduced in the vicinity of the Yammouneh Basin, the Birkat Ram, and the Sea of Galilee during MIS 2. Arboreal components were more abundant in the Dead Sea region compared to the Holocene. However, a continuous and strong human impact, reducing the amount of trees and shrubs, has to be considered for the Holocene (Miller, 1991; Rollefson and Köhler-Rollefson, 1992), also given that mean arboreal percentages for the last interglacial optimum were more than twice as high as for the Holocene (Litt et al., 2012; Chen and Litt, 2018). Dwarf shrubs, grasses, and other herbs dominated the glacial vegetation in the whole study area, and there was no continuous and dense vegetation belt of the Mediterranean biome in the northern parts. Thermophilous trees were probably patchily distributed at moister habitats, particularly in the Jordan Rift Valley, where arboreal pollen percentages are somewhat higher than in the mountainous areas of the north.
The comparison of the described vegetation pattern with modeled biome distributions (Fig. 4) enables us to evaluate two different climate scenarios: a temperature reduction of 6 °C (see section 4.1) with I) a precipitation decrease of 30% following Stockhecke et al. (2016) (Fig. 4d) and II) a precipitation increase of 100% following Enzel et al. (2008) and Vaks et al. (2006) (Fig. 4e). Both model outputs fit well with the pollen data from Yammouneh and Birkat Ram, suggesting the dominance of steppe and reduction of forest during MIS 2. However, the vegetation patterns differ greatly in the Dead Sea region: While scenario I yields a similar distribution of the Mediterranean biome with lower probabilities of occurrence, scenario II suggests a spread of Mediterranean woodland into regions close to the Dead Sea that are nowadays arid. While scenario I slightly underestimates the proportion of Mediterranean woodland compared to the Dead Sea pollen data, scenario II clearly overestimates the distribution of the Mediterranean biome. Further studies are needed to estimate precipitation rates for MIS 2, which were probably slightly reduced, but not by as much as 30%. In addition, precise temperature estimations, particularly for interior regions such as the Jordan Rift Valley, are needed because temperature also modifies the biome distribution.
Slightly lowered precipitation rates during MIS 2 could have still allowed relatively high available moisture for plants and a positive freshwater balance in the Dead Sea region. Reduced temperatures (Fig. 7; Rasmussen et al., 2014), low summer insolation (Figs. 7 and 8; Berger and Loutre, 1991), and reduced catchment-wide evaporation (due to lower temperatures and higher relative humidity; Bar-Matthews et al., 2017) would cause higher effective moisture compared to recent conditions. The total evapotranspiration would have been additionally lowered because of the low woodland cover in the northern mountain range, leading to reduced plant transpiration. This can have a significant impact on the water budget (Schiller et al., 2002, 2010; Ungar et al., 2013). A lower basin-wide evaporation and lower plant transpiration would have still allowed a positive freshwater balance in the lakes, leading to the increased lake levels even under reduced precipitation rates (Stockhecke et al., 2016). In addition, plant cover could have been shaped by available water in their habitats, while additional precipitation was stored as snow on high mountains (Robinson et al., 2006) and released as flash floods. The frequency of flash floods was considerably increased during rising Lake Lisan levels (Ben Dor et al., 2018). Plants could probably not sufficiently use the water brought by rapid snowmelt in spring and rapid drainage during flash floods. Moreover, plant growth could have been limited by reduced atmospheric CO2 levels during the last glacial, as previous studies suggested (e.g., Cowling and Sykes, 1999; Prentice et al., 2017). According to experimental studies, low CO2 can result in decreased plant fitness and increased water use, especially of C3 plants (Gerhart and Ward, 2010 and references therein). Therefore, a reduction of forests and a shift from C3 plant to C4 plant dominance were ascribed to low CO2 levels in other regions (e.g., Levis et al., 1999; Harrison and Prentice, 2003). However, a correlation of woodland retreat and a spread of C4 plants (e.g., many Amaranthaceae species) with gradually decreasing atmospheric CO2 levels during the last glacial (Petit et al., 1999) cannot be detected in the Dead Sea pollen record. Therefore, the impact of CO2 on the Levantine vegetation composition seems to be limited. Still, decreasing CO2 levels could explain the gradually shrinking pollen concentration that indicates a declining vegetation density.
Preliminary insights into the late glacial environment
PAZ II1 (95.99–92.35 m; 15.4–14.2 ka BP) corresponds to part of the late glacial. The phase begins with a pronounced peak of Amaranthaceae and a small peak of Asteraceae (Tubuliflorae and Liguliflorae). Simultaneously, most other taxa drop. Thereafter, the composition and abundance of taxa resemble PAZ II4 with a diverse herbaceous flora and low amounts of Artemisia. It is probable that the initial peaks of Amaranthaceae and Asteraceae reflect a local spread of pioneer plants, engendering a temporary overrepresentation in the pollen assemblage and a statistical suppression of other taxa. A spread of local plants is supported by a simultaneous major lake-level drop of Lake Lisan from ca. 260 to 465 m bmsl, one of the lowest stands during the Late Quaternary (Fig. 7; Stein et al., 2010; Torfstein et al., 2013b). The exposed shores could have been vegetated by saline-tolerant pioneer communities including species of Amaranthaceae and Asteraceae, as modern observations suggest (Aloni et al., 1997). Due to the wind-pollination of Amaranthaceae, local stands of this family are particularly overrepresented in the pollen diagram. A similar peak was observed after a strong lake-level decline at the Sea of Galilee at ca. 24–23 ka BP (Miebach et al., 2017), and the same phenomenon probably played a role at the Dead Sea during the transition to the last interglacial period (Chen and Litt, 2018). Still, a short dry period, as suggested by the deposition of gypsum in the Dead Sea core 5017-1-A (Figs. 5 and 7; Torfstein et al., 2013b; Neugebauer et al., 2014), and the strong lake-level reduction of Lake Lisan (Fig. 7; Torfstein et al., 2013b) most likely additionally triggered the spread of herbs and dwarf shrubs.
Frost-sensitive Pistacia and Olea europaea occur consistently again after a virtual absence of many millennia. This indicates a return to higher temperatures at least during winters (van Zeist et al., 1975;Rossignol-Strick, 1995). The reduction of Artemisia and Juniperus type, which were common during the cold MIS 2 including the Last Glacial Maximum, also imply higher temperatures compared to MIS 2.
Since ca. 16.2 ka, the fire activity increased again. A probable cause could be a higher seasonality with distinct wet and dry seasons. This explanation is in line with the study by Orland et al. (2012), suggesting an increased seasonality since the late glacial inferred from the Soreq Cave speleothems.
Summary and conclusions
The 5017-1 profile obtained by the DSDDP is the longest continuous sediment record from the Dead Sea Basin, enabling the detailed reconstruction of the southern Levantine environmental history (Neugebauer et al., 2014). The detailed vegetation history of the southern Levant in particular was still insufficiently understood given the scarcity of long continuous lacustrine sediment sequences and major chronological uncertainties in many pollen records (e.g., Rossignol-Strick, 1995; Weinstein-Evron et al., 2001; Meadows, 2005). Here, we analyzed the pollen and microscopic charcoal assemblage of the Lisan Formation of the Dead Sea core 5017-1-A, spanning ca. 88–14 ka BP. Moreover, we performed biome modeling to assess shifts in biome distribution in response to climate variations. The biome modeling results illustrate the climate sensitivity of the regional vegetation and help to evaluate climate scenarios for the last glacial Levant.
The pollen record indicates a mixture of Irano-Turanian steppe, Saharo-Arabian desert vegetation, and Mediterranean woodland elements. Although changes in the pollen composition might not provide clear evidence for variations in the precipitation amount, they indicate the availability of water for plants, i.e., the effective moisture. Decreased pollen ratios of AP/NAP, A/A, and Q/A indicate low effective moisture, probably pointing to a Dansgaard-Oeschger signature. Hence, four dry phases occurred during the early last glacial (MIS 5b/a and early MIS 4), which coincide with the deposition of gypsum in Lake Lisan. A high and dynamic fire activity took place.

[Fig. 8 caption: Comparison of a north-to-south gradient in the Levant. Arboreal pollen (AP) with mean percentages shown as dashed lines from A) Yammouneh, Lebanon (Gasse et al., 2011, 2015), B) Birkat Ram, Golan Heights (Schiebel, 2013; discontinuous sedimentation might occur between ca. 10 and 17 ka BP), C) Sea of Galilee, northern Israel (MIS 1: Schiebel and Litt, 2018; MIS 2: Miebach et al., 2017), and D) Dead Sea, Israel/Palestine/Jordan (MIS 1: Litt et al., 2012; MIS 2: this study). E) June and December insolation for the 30th parallel north (Berger and Loutre, 1991). F) Lake-level reconstructions of the Sea of Galilee (Hazan et al., 2005), the Dead Sea, and its precursor Lake Lisan (Torfstein et al., 2013b). Marine isotope stages (MIS) refer to Lisiecki and Raymo (2005).]
An increased proportion of Irano-Turanian steppe vegetation and Mediterranean woodland elements suggests consistently high effective moisture during late MIS 4, MIS 3, and MIS 2, when fire activity was continuously low. Biome modeling suggests that no precipitation increase is needed for such amounts of effective moisture. Lower insolation, reduced catchment wide evapotranspiration, and low temperatures were probably sufficient for a positive water balance. MIS 2 was the coolest period of the investigated timeframe, as indicated by a change in arboreal taxa.
The residence of anatomically modern humans in the southern Levant during the last glacial was supported by stable environmental conditions with relatively high effective moisture and low fire activity. In contrast, Neanderthals had already lived in a dynamic ecosystem with changing water availability and higher fire activity, thus tolerating a wide spectrum of environmental conditions.
The comparison of Levantine pollen records along a north-to-south gradient indicates that, at least during MIS 2, there was no gradient of available water for plants comparable to the Holocene and today. An overall more similar, open vegetation occurred, and there was no dense Mediterranean woodland belt in the north. Scattered stands of thermophilous woodland components, particularly found at lower altitudes such as the Jordan Rift Valley, formed a heterogeneous landscape.
Major environmental changes occurred during the late glacial. However, further high-resolution analyses based on a robust chronology are needed to reveal the detailed vegetation and fire history.
The new palynological record contributes towards our understanding of the influence of long-term and short-term climate oscillations on the environment. New insights into the Levantine vegetation history help to reconstruct and evaluate the regional paleoclimate, which also carries implications for recent and future climate changes. Furthermore, the detailed knowledge of the environmental setting is essential to reveal the relationship between environmental developments and anthropogenic processes in the past.
Data availability
The palynological dataset related to this article is available on the PANGAEA database (https://doi.pangaea.de/10.1594/PANGAEA.900564). | 2019-06-07T22:36:08.433Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "485c146447454a44eb338d5f14e986ad912252f0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.quascirev.2019.04.033",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a1ff9f80a8e9aade9ac432713b7b11292482485b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
263933583 | pes2o/s2orc | v3-fos-license | Clinical and ultrastructural study after partial inferior turbinectomy
Summary We report clinical and histological results obtained after partial inferior turbinectomy (PIT), surgery indicated for the treatment of chronic nasal obstruction. Methods Twenty patients were divided into two groups submitted to PIT plus septoplasty and PIT alone. The patients were reassessed clinically and histologically by means of a biopsy of the regenerated areas in the inferior turbinates at two different times after PIT, i.e., after 8 to 12 months (group A) and after two years (group B). Results The clinical results proved to be satisfactory for the relief of nasal obstruction in group A and unsatisfactory in group B. However, better histological results with better recovery and epithelial differentiation of the regenerated mucosa of the inferior turbinates after PIT were observed in group B. Conclusion Surgery proved to be effective on a short-term but not on a long-term basis, and histological recovery did not accompany improvement of clinical signs and symptoms.
INTRODUCTION
Breathing well is directly related to quality of life. Good breathing demands good permeability of the nasal airways, the physiological entry point of airflow. Chronic nasal obstruction is the symptom responsible for most patient visits to otorhinolaryngologists in daily practice. Advances in rhinology currently offer a wide array of options, through new diagnostic and therapeutic tools, that allow a better understanding and treatment of pathologies related to nasal obstruction. Even so, this task often remains a challenge. Turbinate hypertrophy and nasal septal deviation are the main causes of nasal obstruction. The association of septal deviation with compensatory (vicarious) inferior turbinate hypertrophy on the side opposite the deviated septum is a frequent finding on nasal examination. Modern pharmacology offers a large number of options for the clinical treatment of nasal obstruction due to turbinate hypertrophy, whatever its origin (allergic, idiopathic, drug-related or other), complemented by immunotherapy in allergic cases. However, although still a controversial issue, most authors agree that when clinical treatment is not enough to provide good nasal permeability, surgical treatment should be indicated 1,2 . Today, inferior turbinate hypertrophy can be corrected with total, partial or submucosal turbinectomies and turbinoplasties, besides other procedures such as electrocautery, cryosurgery, laser vaporization and new technologies such as somnoplasty or coblation 3 .
Partial inferior turbinectomy (PIT) is a relatively simple surgical technique, used at the Otorhinolaryngology Department of Hospital das Clínicas at Ribeirão Preto Medical School - USP (HC-FMRP) for more than 15 years. This study aims to: (1) contribute to the understanding of the clinical benefits PIT offers to patients with chronic nasal obstruction caused by inferior turbinate hypertrophy, through the analysis of a group of patients at an earlier post-operative stage (8 months to one year) and another group at a later stage (after two years); (2) describe the ultrastructural findings in the regenerated inferior turbinate mucosa after PIT, with emphasis on ciliary recovery; (3) establish a relationship between the clinical situation of these patients and the ultrastructural findings in the re-epithelialized inferior turbinate mucosa.
MATERIALS AND METHODS
The twenty patients selected were divided into two groups with ten patients per group.
There were eleven females and nine males. The patients' ages on the date of reassessment varied from 12 to 57 years, with an average of 25 in group A and 23 in group B. The inclusion criteria were chronic nasal obstruction without response to clinical treatment and inferior turbinate hypertrophy (unilateral or bilateral, allergic or not), with or without septal deviation, as causal agents. All patients were submitted to unilateral or bilateral PIT. Thirteen patients were also submitted to septoplasty (Killian or Cottle techniques), with the surgical result considered satisfactory (centered septum or no significant deviation). PIT was preceded by injection of adrenalin solution (concentration 1:100,000), followed by medial dislocation of the turbinates. The amount of excised tissue varied according to the degree of hypertrophy of the turbinate, with removal of soft tissue (mucosa and lamina propria) and adjacent turbinate bone, basically along the whole anterior-posterior extension of the free border. Angled scissors were used for excision. Exclusion criteria were: other diseases related to nasal obstruction, such as nasal polyposis, infectious rhinosinusitis detected at any point during the assessments, alterations of the nasal valve, hypertrophy of the middle turbinate or bubble-like middle turbinate, cystic fibrosis or primary ciliary dyskinesia, as well as tuberculosis, chronic renal disease, diabetes or immunodeficiency. Patients who overused sympathomimetic vasoconstrictors did not participate, because of the possible effects of these drugs on the nasal mucosa. Patients with an incomplete pre-operative clinical assessment, those who missed the first post-operative return visits, those with inadequate biopsy material for histological study, and those who refused to take part in the study were also excluded.
Group A: Ten patients, reassessed clinically and histologically (nasal biopsy) between eight and twelve months after PIT (10 months average), were considered short-term post-operative.
Group B: Ten patients, reassessed clinically and histologically two years after PIT (25 months average), were considered medium-term post-operative. At this assessment, patients were submitted to a new interview and a complete otorhinolaryngological exam. The site chosen for biopsy was 2 to 3 cm posterior to the anterior end of the operated inferior turbinate, on its medial face. In bilateral PIT cases, we agreed to biopsy the left side.
The present study was approved by the HC-FMRP Medical Ethics Committee under protocol # 1000. Once removed, the inferior turbinate sample went through a series of steps to prepare it for electron microscopy: immersion in a bottle of 3% glutaraldehyde fixation solution in phosphate buffer, kept in a thermal container at 4 degrees Celsius for two hours; washing in phosphate buffer; post-fixation with 1% osmium tetroxide; dehydration in acetone of increasing concentration (30, 50, 70, 90 and 95%), changing the concentration every 10 minutes, and three 20-minute sessions at 100%; and Araldite 6005 infiltration with propylene oxide at a 1:1 ratio for 48 hours.
Later, the sample was embedded in the same pure resin for 72 hours at 60 degrees Celsius. The blocks obtained were trimmed and cut on an ultramicrotome, and semi-thin sections (0.5 micrometer) were obtained for study under the light microscope. At this stage, the sections were mounted on slides and stained with 2% toluidine blue at pH 12.0 in order to select the areas with adequate orientation for the ultrathin sections (60-70 nanometers). These were mounted on copper grids, contrasted with 4% uranyl acetate for 15 minutes and 0.3% lead citrate for another 15 minutes, and washed again. Finally, the material was examined and electron micrographs were taken with a Philips 208 transmission electron microscope. The histological analysis of the biopsies was done by the same histopathologist.
Pre-operative results
In the pre-operative stage, we collected interview and physical examination data from the twenty patients included in this study, ten from each group. Figure 1 shows the statistical data (percentages) on nasal obstruction features, according to the laterality and duration of symptoms. Table 1 refers to the main symptoms that may be related to chronic nasal obstruction: itching, secretion, sneezing, hyposmia, oral breathing, headaches and subjective cacosmia, showing the number of patients affected in both groups. One patient from each group also reported snoring. The prevalence of unilateral or bilateral inferior turbinate hypertrophy, and the presence or absence of septal deviation detected on physical examination (anterior and posterior rhinoscopy), are shown in Figure 2. Some patients used up to three different types of drugs to deal with nasal obstruction and still had an indication for surgical intervention. It is worth noting that most patients reported using the prescribed drugs regularly. Figure 3 depicts the surgical procedures adopted for each group, with the statistical analysis.
Post-operative results
The clinical and histological results obtained in the post-operative stage are shown separately for groups A (short-term post-operative) and B (medium-term post-operative). Table 2 presents the post-operative symptom findings for groups A and B, respectively, according to the patients' report of total or subtotal improvement (considered good results), partial improvement, no change or worsening of symptoms. Nasal obstruction, the most important symptom, is represented graphically (Figure 4). We did not observe post-operative complications such as significant bleeding or infection. Figure 5 shows the physical examination findings (anterior and posterior rhinoscopy) in groups A and B regarding trophic features of the inferior turbinate mucosa. Alterations such as synechiae, polyps, rhinosinusitis, and nasal turbinate degeneration were not found. It is worth mentioning that the macroscopic aspect of the operated turbinates was very similar to that of turbinates never operated on. As established in the inclusion criteria for this study, the nasal septum of all patients (whether or not submitted to septoplasty) was centered or without significant deviation in the post-operative stage. The frequency of the different epithelial types observed in post-operative biopsies is listed in Figure 6, organized according to the degree of differentiation of the epithelium (from less to more differentiated). In some samples, up to two different epithelial types were noticed, with transition areas between the two epithelia, which is why the total number of epithelial types described in each group is higher than 10. This type of classification can be easily observed under light microscopy. The ultrastructural features of the cilia were observed by electron microscopy, where we could verify that, whenever present, the cilia appeared normal in their ultrastructure (Figure 7). We did not observe any ciliary ultrastructural alteration, such as short or compound cilia or cilia with tubular alterations. The number of goblet cells observed in both groups was considered normal for the different epithelial types studied. Although this was not the main aim of this study, some observations on the lamina propria were made. In the post-operative stage, with the exception of one patient in group A who mentioned occasional use of a sympathomimetic vasoconstrictor (considered non-abusive), the only nasally active medication reported by patients was nasal steroid spray. In both groups (A and B), exactly half the patients used this type of drug, but none of them used it continuously. With regard to the
DISCUSSION
Chronic nasal obstruction represents the main reason for seeking medical help and the most important complaint of patients, besides being the most important target to be corrected with surgery. For most patients, nasal obstruction appeared to be severe in the pre-operative stage, since it was reported as constant in 75% of cases, besides affecting both nasal cavities in 90% of patients (groups A + B together). Such severity is likely the main factor that led these patients to seek a specialist to relieve nasal obstruction. There was no significant difference between groups A and B regarding nasal obstruction features in the pre-operative stage. Bambirra, in 1993 4 , observed a similar incidence of nasal obstruction laterality, with 92.86% of patients having a bilateral complaint. The author also found similar reports regarding nasal obstruction duration, constantly present in 85% of cases.
Analyzing the results obtained in group A, we see that 90% evolved with total or subtotal improvement of nasal obstruction (considered a satisfactory result), and only one patient (10%) showed no post-operative improvement. In group B, the results were less satisfactory: 30% of patients reported total or subtotal improvement, 30% partial improvement and 40% no change in nasal obstruction. Although the sample is not large enough for a broad statistical analysis, there is evidence that PIT post-operative results for the relief of nasal obstruction are better at an earlier stage than two years after surgery, when nasal obstruction returns in a larger number of patients. It is worth mentioning that studies like this, which require patients to return on specific days and, above all, to accept a nasal biopsy, face technical and ethical limitations in obtaining a large sample 5-8 .
None of the patients complained of worsening of their nasal obstruction; however, we should highlight that most of them already presented the same severe form in the pre-operative stage, with little or nothing to get worse.
Inferior turbinate hypertrophy was observed bilaterally in the pre-operative stage in 60% of patients (groups A and B). In group A, bilateral hypertrophy was present in 40% of patients, and in group B, in 80%, showing a marked difference between the two groups. Such findings could lead us to believe that bilateral inferior turbinate hypertrophy has some negative influence on the post-surgical prognosis for patients in group B. In order to clarify this issue, we reviewed the assessment protocol, where we observed that three of the cases with total or subtotal improvement in group B had previously shown bilateral hypertrophy, and that, among the patients who had shown unilateral hypertrophy, one showed partial improvement and another showed no change. Therefore, bilateral inferior turbinate hypertrophy did not appear to be a variable influencing post-operative results.
Physical examination findings were associated with the post-operative complaint of nasal obstruction. In group A, the group with the best results, only two patients showed inferior turbinate hypertrophy, one unilateral and the other bilateral. Similarly, the worse results seen in group B were related to a higher number of patients with hypertrophy: seven patients, three of them with unilateral hypertrophy and four with bilateral hypertrophy.
Although controversial, surgical treatment of the inferior turbinate represents the best alternative for the relief of chronic nasal obstruction if the patient does not respond to clinical treatment. In the literature, several papers show the efficacy of PIT, with advantages that make it at least comparable to (or better than) other current techniques for surgical reduction of the inferior turbinate 5 . Its easy execution and the possibility of good tissue removal are its greatest advantages; in addition, it does not require the larger devices, or the inherent costs, of techniques such as laser vaporization or somnoplasty. Higher rates of scar formation and bleeding are reported as the main disadvantages of PIT 1 . However, in this study none of these findings appeared to be relevant, since we did not observe significant bleeding or excessive crust formation in the post-operative period. The use of an oily substance combined with saline solution in the post-operative period may have contributed to less crust build-up. Some authors have also reported better results with PIT than with other techniques such as submucosal cauterization, cryotherapy and even the highly praised inferior turbinoplasty 9,10 .
In order to evaluate the benefits of PIT, it is essential to note which post-operative stage is analyzed since, as shown in this study, the results may vary according to the period assessed. If we analyzed patients from both groups (A + B) together, we would observe 60% total or subtotal improvement, 15% partial improvement and 25% unchanged. The 100% improvement in nasal obstruction in 20 patients submitted to PIT, described by Missaka (1972) 8 , referred to reassessments up to the sixth post-operative month. Such results, which are similar to those obtained for group A (90% with good evolution), may be related to the fact that the reassessment happened at an earlier post-operative stage.
Elwany & Harrison (1990) 9 and Bambirra (1993) 4 reassessed their patients one year after PIT, obtaining 75% and 82.72% improvement in nasal obstruction, respectively. Comparing these results with our study, we can say they are compatible with those obtained in group A, because they correspond to an approximately similar post-operative period. However, regarding group B, we found descriptions such as Meredith's (1988) 10 , who reported better results with PIT in the medium term (86% nasal obstruction improvement) than those of group B (30% satisfactory improvement and 30% partial improvement), over a slightly longer period of surgical reassessment. Coutiss & Goldwyn (1990) 11 also reported good results over longer reassessment periods, with 72% improvement with PIT after 10-16 years (13 years average).
The reasons for the better clinical results in group A compared to group B deserve some consideration. We can imagine that a longer post-operative period, as in group B, gives inadequately controlled etiological factors of turbinate hypertrophy (allergy, vasomotor factors, infections) time to act and cause renewed turbinate hypertrophy. Authors highlight the importance of continuous post-operative follow-up in order to control these etiological factors 1,2 . At the group B reassessment, most patients had not been back to the office regularly for more than a year. In most cases, according to several patients in this group, nasal obstruction improved in the first post-operative stage, so they stopped returning to the doctor's office or even received early medical discharge. Although it was not our goal to explain all the factors involved in hypertrophy or re-hypertrophy of the inferior turbinates, we believe several factors (genetic, environmental, allergic, infectious, drug-related, hormonal, biomechanics of the nasal cavity and others) may be involved in the genesis of re-hypertrophy. In any case, we believe extended post-operative follow-up after PIT is necessary, together with good patient education on the real benefits the surgery can offer.
Regarding histological alterations, we observed different degrees of metaplasia, mature or not, with different types of epithelium shown 5-7 . Other studies using light microscopy have described mucosal structural alterations after PIT with a similar variety of epithelial types 4,8 .
We did not find similar histological studies using electron microscopy after PIT aiming to investigate ultrastructural ciliary alterations. More than simply detecting the presence of ciliated epithelium in the regenerated mucosa, it was possible to determine the features of these cilia. We verified that, whenever present, the cilia showed a completely normal ultrastructure. None of the secondary ciliary alterations reported by Jorissen (1996) 12 (compound cilia, projections or membrane loss, excess of cytoplasmic matrix, ciliary disorder, anomalies of the peripheral or central microtubules) were observed; that is to say, PIT did not appear to be a causal factor for secondary ciliary dyskinesia due to ciliary ultrastructural alteration. We could even observe greater epithelial differentiation (and a smaller amount of mature metaplasia) in group B than in group A, with the presence of cilia in seven samples, four of which showed ciliated pseudostratified columnar epithelium (respiratory epithelium). These findings suggest that there is total recovery of the mucosa after PIT, including its ultrastructural aspects, which are, according to authors such as Jorissen (1996) 12 , directly related to functional aspects of the nasal mucosa.
Thus, we verified that although the patients at the more recent post-operative stage (group A) showed better clinical results for nasal obstruction than the later post-operative group (group B), histologically, patients from group B showed greater differentiation of the structural and ultrastructural elements of the mucosa, with lower rates of epithelial metaplasia. According to Robbins et al. (1986) 13 , most of the time metaplasia is an undesirable, though necessary, cellular alteration. Besides the epithelial alterations, the lamina propria and its elements deserve to be the object of further investigations in order to broaden the knowledge | 2018-04-03T02:58:48.763Z | 2006-09-01T00:00:00.000 | {
"year": 2015,
"sha1": "35aa30d8d258d1f6dfe1d3a208b5a8ec78c75613",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s1808-8694(15)31016-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "79d8a04768f51844a110a708dbde8b5382ac91e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236459627 | pes2o/s2orc | v3-fos-license | The Impact of the COVID-19 Pandemic and Lockdown on Mild Cognitive Impairment, Alzheimer's Disease and Dementia With Lewy Bodies in China: A 1-Year Follow-Up Study
Background: While the lockdown strategies adopted by many countries effectively limited the spread of COVID-19, they were thought to have a negative impact on older people. This study aimed to investigate the impact of lockdown on cognitive function and neuropsychiatric symptoms over a 1-year follow-up period in patients with mild cognitive impairment (MCI), Alzheimer's disease (AD) and dementia with Lewy bodies (DLB). Methods: We enrolled consecutive patients with MCI, probable AD or DLB who were receiving outpatient memory care before the COVID-19 pandemic and followed them up face-to-face after 1 year, during the COVID-19 pandemic, to assess changes in physical activity, social contact, cognitive function and neuropsychiatric symptoms (NPS). Results: A total of 105 probable AD, 50 MCI and 22 probable DLB patients were included and completed the 1-year follow-up between October 31 and November 30, 2020. Among the respondents, 42% of MCI, 54.3% of AD and 72.7% of DLB patients had a decline in MMSE scores, and 54.4% of DLB patients had worsening Neuropsychiatric Inventory (NPI) scores. Patients with DLB showed a more rapid decline of MMSE than those with AD. Diminished physical activity and social contact might have hastened the deterioration of cognition and the worsening of NPS. Conclusion: Social isolation and physical inactivity, even after strict lockdown for at least 6 months, were correlated with accelerated decline of cognitive function and NPS in patients with AD and DLB.
INTRODUCTION
An outbreak of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), began in December 2019 in Wuhan, Hubei, China, and rapidly spread across the globe within approximately 3 months (1). The Chinese government acted immediately and put the whole country into lockdown from 23 January 2020 to 23 February 2020. From February to April, citizens were still asked to stay at home and limit outdoor activities. The World Health Organization declared the COVID-19 pandemic on 11 March 2020. Social distancing was practiced by canceling events and gatherings, closing public places, working at home, avoiding physical contact and implementing travel restrictions. In China, every citizen was given a permission card and only allowed to leave home every second day for a maximum of 30 min. As such, outdoor physical activities were extremely limited. In consideration of the elevated risk of infection and death in the elderly, experts especially reminded them to reduce outdoor activities (2).
Patients with cognitive impairment are mostly aged over 60 years, and their physical and mental health were directly and indirectly affected by the COVID-19 pandemic. As is well known, physical inactivity is a modifiable risk factor for Alzheimer's disease (AD). Preliminary data indicate that physical activity (PA) levels decreased among older adults by 26.5% during the pandemic (3). Researchers may pay more attention to the impact of worsening cognitive function. Several studies have focused on neuropsychiatric symptoms (NPS) and mental health in elderly people and dementia patients (4, 5). They showed that the COVID-19 pandemic had a wide negative impact on the mental well-being of older adults with and without dementia. However, most of those studies are case reports or cross-sectional studies with small sample sizes. So far, no studies have followed dementia patients for at least 1 year during the COVID-19 pandemic, and none have discussed differences in change between cognitive impairment subtypes. Mild cognitive impairment (MCI) usually involves memory loss, but it differs from AD in that it does not typically affect a person's ability to complete daily tasks. Some people with dementia with Lewy bodies (DLB) have both cognitive impairment and parkinsonism, and the prevalence of hallucinations in DLB is higher than in AD. The aim of our study was to investigate changes in cognitive function and NPS during the first year of the COVID-19 pandemic in patients with mild cognitive impairment, AD and dementia with Lewy bodies (DLB), and to find similarities or differences between the dementia types. In addition, we explored predictive factors at baseline or during the lockdown period, especially physical activity and social contact, for the worsening or improvement of those symptoms. We hypothesized that cognitive and neuropsychiatric symptoms decline faster in DLB and AD than in MCI, and that the decline of cognition and neuropsychiatric symptoms in the dementia groups may be related to the decrease in physical activity and social contact.
Design and Participants
This was a 1-year longitudinal and observational study conducted at the memory clinic of Tianjin Huanhu Hospital. We consecutively recruited a total of 214 participants [probable Alzheimer's disease (AD) = 130, mild cognitive impairment (MCI) = 56, probable dementia with Lewy bodies (DLB) = 28] who underwent evaluation of cognitive function and NPS from 30 September 2019 to 31 December 2019, before the COVID-19 pandemic. The diagnoses of probable AD and MCI were based on the National Institute on Aging and Alzheimer's Association (NIA-AA) criteria (6), and the diagnosis of probable DLB was according to the fourth consensus report of the DLB consortium (7). Blood tests, neurological examination, neuroimaging (including CT scans or MRI) and, if necessary, positron emission tomography (including FDG-PET and amyloid PET) were performed to make the diagnosis. All clinical diagnoses of dementia were made by consensus agreement of at least two experienced neurologists. The exclusion criteria included severe loss of vision or hearing, physical disability, loss to follow-up, newly occurring delirium, stroke, and life-threatening illness. At the 1-year follow-up, we repeated the neurological examination and other assessments and reviewed the diagnoses, excluding participants in the MCI group who had progressed to dementia and participants in the DLB and AD groups whose diagnosis was revised to another disease. Finally, 177 participants (probable AD = 105, MCI = 50 and probable DLB = 22) completed all of the evaluations of cognition and NPS face to face (Figure 1). All information except the cognitive assessment was collected from the caregivers of the participants.
Assessment
Our study was designed with two time points of observation. At baseline and at the 1-year follow-up, the neurologists recorded the clinical characteristics, lifestyle, medical history, disease history and medication use of the participants. Marital status was divided into married and unmarried, and education level into low (0 to 12 completed years) and high (13 years and above). Lifestyle behaviors included smoking and drinking. A smoker was defined as an individual with a history of smoking ≥ 5 cigarettes per day for >2 years. An alcohol drinker was defined as an individual with a history of drinking an alcoholic beverage ≥ 1 time per week for >2 years. We set up a self-designed questionnaire, with reference to a health survey (8), which combined questions on frequency, duration and intensity to form a summary index of weekly PA, weighted by intensity level. Social contact was measured using a self-rated questionnaire, with reference to a cohort study (9), that assessed the number and frequency of contacts with relatives and friends. Baseline PA and social contact were evaluated before the COVID-19 pandemic, and 1 year later neurologists interviewed the patients retrospectively to assess the change in PA and social contact during the January to April 2020 lockdown. Global cognition was measured with the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MOCA). The Neuropsychiatric Inventory (NPI) was used to assess the frequency and severity of NPS, and depression was evaluated with the Hamilton Depression Rating Scale (HAMD). Subjects with severe depression (HAMD score > 30) were excluded from our study. The capacity to perform activities of daily living (ADL) was assessed with the ADL scale, and daytime sleepiness was assessed with the Epworth Sleepiness Scale (ESS).
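As one illustration of how an intensity-weighted weekly PA index of the kind described above could be computed, consider the sketch below; the intensity weights, function name, and example answers are invented for illustration, and the study's actual scoring may differ.

```python
# Hypothetical intensity weights; the questionnaire's real weighting scheme
# is not specified here.
INTENSITY_WEIGHT = {"light": 1.0, "moderate": 2.0, "vigorous": 3.0}

def weekly_pa_index(sessions_per_week, minutes_per_session, intensity):
    """Intensity-weighted activity minutes per week."""
    return sessions_per_week * minutes_per_session * INTENSITY_WEIGHT[intensity]

# Example: 4 sessions per week, 30 minutes each, moderate intensity
print(weekly_pa_index(4, 30, "moderate"))  # 240.0
```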
Statistical Analysis
Clinical characteristics and disease history are described as means ± standard deviation for continuous variables and numbers (percentages) for categorical variables in the three groups, covering gender, age, disease duration, Clinical Dementia Rating (CDR) stage, education level, marital status, comorbidities and medication use. The normality of the distributions was analyzed using the Shapiro-Wilk test. Differences among the MCI, probable DLB and AD groups were assessed with analysis of variance (ANOVA) followed by Bonferroni-corrected pairwise comparisons for continuous variables, and with the chi-squared test, Pearson's test or Fisher's exact test for categorical variables. The comparison of means at baseline and 1 year was made with the paired-samples Student's t test. Correlations were analyzed using Pearson's r or Spearman's ρ, as appropriate. A multiple linear regression analysis was used to identify possible risk factors for worsening MMSE and NPI scores in AD and DLB patients during the COVID-19 pandemic. All data analyses were performed with SPSS Statistics 25.0.
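For readers who want to reproduce this workflow outside SPSS, a minimal Python sketch of the same sequence of tests is given below; the DataFrame column names (group, mmse_baseline, delta_pa, etc.) are hypothetical placeholders, not the study's actual variable names.

```python
# Minimal sketch of the described analyses (the paper used SPSS 25.0).
# Column names (group, mmse_baseline, mmse_1yr, delta_pa, ...) are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("followup.csv")  # hypothetical file with one row per participant

# Normality check for a continuous variable (Shapiro-Wilk)
print(stats.shapiro(df["mmse_baseline"]))

# One-way ANOVA across the three diagnostic groups
groups = [g["mmse_baseline"] for _, g in df.groupby("group")]
print(stats.f_oneway(*groups))

# Paired t test: baseline vs. 1-year follow-up within one group
ad = df[df["group"] == "AD"]
print(stats.ttest_rel(ad["mmse_baseline"], ad["mmse_1yr"]))

# Multiple linear regression: change in MMSE explained by candidate predictors
model = smf.ols(
    "delta_mmse ~ age + sex + duration + delta_pa + delta_social + sleep_disturbance"
    " + mmse_baseline + npi_baseline + hamd_baseline + ess_baseline",
    data=ad,
).fit()
print(model.summary())
```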
This study was designed and conducted in accordance with the Declaration of Helsinki and written informed consent was signed by all participants.
Baseline Patient Characteristics in Three Groups
The baseline characteristics of the patients with MCI, AD and DLB are shown in Table 1. The age of the patients in the DLB group was significantly higher than that of the MCI group, but not significantly different from the AD group. Notably, the prevalence of sleep disturbance was much higher in DLB compared to AD and MCI. Utilization of acetylcholinesterase inhibitors and NMDA receptor inhibitors was higher in AD and DLB than in MCI, while there was no significant difference between AD and DLB. All three groups were similar in gender, education level, marital status, CDR stage and disease history. As for the scales, the MMSE, MOCA, and ADL scores were lower in AD and DLB than in MCI, but there were no significant differences between the AD and DLB groups. Patients with AD and DLB had higher NPI and HAMD scores than those with MCI, and depression and NPS were most severe in DLB.
Changes in Three Groups Between Baseline and 1-Year Follow-Up
As shown in Figure 2, MMSE scores declined in 42% of MCI, 54.3% of AD and 72.7% of DLB patients during the COVID-19 pandemic, but the difference was significant only between MCI and DLB. Also, there was an increasing trend from MCI to AD and DLB in the rate of deterioration of NPI and ADL. Table 2 shows the decline of PA, social contact, cognition and neuropsychiatric symptoms in patients with MCI, AD and DLB during the COVID-19 pandemic compared to the pre-pandemic situation. PA and social contact were both significantly decreased during the pandemic period in all three groups. Cognitive function, evaluated by MMSE and MOCA, and NPS evaluated by NPI and HAMD, did not show obvious change in patients with MCI, but in patients with AD, MMSE, MOCA, NPI, ADL, HAMD, and ESS were markedly worsened during the COVID-19 pandemic. DLB patients also had lower MMSE and MOCA scores compared to the pre-pandemic baseline scores. However, the NPI, HAMD and ESS scores worsened or improved only slightly and without significant differences in the DLB group. Table 2 also shows the mean values and SD of the changes in PA, social contact, MMSE, MOCA, NPI, ADL, HAMD and ESS for the three groups. The decline of PA and social contact seemed similar among the three groups. We found significant differences in cognition between DLB and MCI but not between MCI and AD or AD and DLB. The MMSE score declined by 3.6 points and the MOCA by 2.5 points in the DLB patients during COVID-19. Corresponding to the cognitive decline were various degrees of decline in ADL for patients in the AD (6.4 ± 9.8) and DLB (8.6 ± 12.2) groups. The drop in ADL in AD and DLB was significantly greater than in MCI. We additionally noticed that the degree of deterioration of NPI, HAMD, and ESS was not significantly different among the three groups.
Predictors of Cognitive and Neuropsychiatric Decline in Alzheimer's Disease and Probable Dementia With Lewy Bodies
We checked for the association between cognitive decline (MMSE scores), worsening of neuropsychiatric symptoms (NPI scores) and hypothesized risk factors including baseline characteristics, lifestyle, sleep disturbance and baseline MMSE, NPI, HAMD, and ESS in patients with AD and DLB ( Table 3).
We found that the decline in PA (r = 0.274, P = 0.005) and the presence of sleep disturbance (r = 0.208, P = 0.033) were positively correlated with declining MMSE scores in AD. Several factors were also correlated with NPI in AD, including social contact (r = 0.273, P = 0.005), sleep disturbance (r = 0.279, P = 0.004), baseline MMSE (r = −0.274, P = 0.005) and ESS (r = 0.298, P = 0.002). In DLB patients, there was no correlation between declining MMSE scores and the other hypothesized factors, and the change in social contact (r = 0.496, P = 0.029) was the only factor with a significant positive correlation with the worsening of NPI scores. Multiple linear regression analysis was used to find predictors of cognitive decline and worsening NPS in AD (Table 4). After adjusting for age, gender, disease duration, sleep disturbance and baseline MMSE, NPI, HAMD, and ESS, the results showed that the decrease in PA during lockdown predicted the decline in MMSE scores and the decrease in social contact predicted the worsening of NPI scores.
DISCUSSION
This study is the first longitudinal 1-year follow-up study of dementia during the COVID-19 pandemic. The study included patients with MCI, AD, and DLB who experienced lockdown for about 4 months during the pandemic, and we investigated the influence of lockdown and stay-at-home mandates on the decline of cognitive and neuropsychiatric function. We found that patients in the DLB group had the highest proportion of decline in cognitive and neuropsychiatric function. Moreover, the sudden decrease in PA during lockdown predicted the decline of MMSE scores at the 1-year follow-up in AD patients, and the decrease in social contact appeared to be a risk factor for the worsening of NPS. Our longitudinal study showed different degrees of deterioration of cognitive function, NPS and ADL after the lockdown in AD and DLB. At the 1-year follow-up, we found that patients with AD had an average cognitive decline (MMSE) of 1.6 points and those with DLB lost 3.6 points during the COVID-19 pandemic. The drop in MMSE scores in AD was similar to the rate of yearly decline (∼1.6 points per year) in previous studies without lockdown. However, in DLB, the mean annual decline in MMSE scores in a long-term multicenter cohort was 2.1 points (10). A prior single-center study with 67 patients who were followed for up to 5 years also showed a more rapid decline, by ∼1 point per year, in DLB compared to AD (11). In our study, the MMSE scores of patients with DLB dropped more than twice as fast as those of patients with AD during the first year of COVID-19. This could imply that during lockdown and stay-at-home mandates, patients with DLB had a more rapid cognitive decline than ever before. DLB patients usually have complex pathologies such as cortical Lewy bodies and Alzheimer-type pathology, which may be one reason for a more rapid decline in cognition compared to AD (12). During the pandemic, most of the care of patients with dementia was carried out at home. The clinical profiles of DLB and AD are different, and DLB patients generally have more severe psychiatric symptoms and a higher prevalence of sleep disorders than those with AD. The care of patients with DLB may be more difficult, and a lack of available professional care may accelerate their decline.
The more rapid progression of DLB compared with MCI and AD is reflected not only in cognitive function, but also in NPS. NPI scores deteriorated in ∼54.5% of the DLB patients, compared to 43.8% of AD and 22.0% of MCI patients during the first year of the COVID-19 pandemic. It is notable that the increase in NPI scores (2.7 points) over 1 year was significant in AD, while the change of 2.5 points in DLB was not. This could be because the NPI scores were much higher at baseline in DLB than in AD, so that an NPI increase of 2.5 points in DLB was not as obvious. Despite this, our findings are still consistent with previous studies. Researchers conducted a multicenter nation-wide survey in Italy and interviewed 4,913 participants with AD, DLB, frontotemporal dementia (FTD) and vascular dementia (VD) at 1 month after the imposition of a quarantine and found that 59.6% of patients had worsened behavioral and psychological symptoms of dementia (BPSD) (13). In addition, a study with a small sample size in Spain showed worsening NPI in AD (n = 20) and MCI (n = 20) by approximately 6 points and 4.5 points, respectively, after 5 weeks of domiciliary confinement (14).
The collected evidence supports the role of lockdown and confinement in worsening cognition and NPS in patients with MCI and dementia. Further, using multiple linear regression analysis, we found that the decline of social contact was related to the increased NPI scores in AD and DLB and to declining MMSE scores in DLB during the first year of the COVID-19 pandemic. Pre-pandemic studies also suggested that social isolation could predict a steeper decline in cognition and NPS (15)(16)(17). Several hypotheses may explain the association between social isolation and cognitive function. Social isolation may lead to augmented stress reactivity that is linked to prolonged activation of the hypothalamic-pituitary-adrenal (HPA) axis and the sympathoadrenal system and to glucocorticoids resistance, which are assumed to have deleterious effects on the prefrontal cortex and the hippocampus (18,19). At the same time, oxidative stress affects the metabolic and peripheral immunity organs. Thus, the inflammation caused by stress might contribute to other common chronic diseases such stroke or diabetes, which are additional risk factors for cognitive decline (20). Furthermore, long term social isolation can induce loneliness, which is also known to have negative impacts on cognitive function and NPS (21,22).
PA, which is accepted as a modifiable risk factor in dementia (23), decreased sharply in all participants during the lockdown, and the multiple linear regression analysis showed a positive correlation between reduced PA and cognitive decline in AD. PA can improve the management of cardiovascular risk factors such as diabetes, hypertension and obesity, increase neurogenesis and synaptic plasticity, and even predict brain function and structure (24).
Considering that previous studies were conducted immediately after 1 to 2 months of lockdown, the differences between our study and those studies may reflect the short-term versus long-term impact of lockdown on people with dementia. Few articles discuss this. Compared with the present study, patients tended to have worse NPI scores immediately after lockdown than at 6 months after lockdown. We believe that 6 months was enough time for the acute stress in dementia patients to be alleviated as they adjusted to their new lifestyle.
The COVID-19 pandemic has had a significant impact on the elderly, particularly on elderly individuals with AD and related disorders. The epidemic began to worsen again in late 2020, and many countries renewed lockdown policies, which poses a new challenge for patients and caregivers. The results of this study suggest that patients should maintain regular exercise and socialization routines during COVID-19 confinement periods by means of home-based workouts that include endurance, resistance and balance exercises or app-based exercise training with online partners (25), and by taking advantage of virtual socialization through technologies such as social media, videoconferencing and internet training (26).
The strength of this study is that it is a longitudinal 1-year follow-up study after a lockdown period during the COVID-19 pandemic. We used a variety of scales to evaluate the global cognitive function and NPS of dementia patients at two time points to reveal the similarities and differences between dementia types. Furthermore, we quantified PA and social contact and tried to predict the risks of worsening cognition and NPS. There are also some limitations in this study. It is a single-center study and the sample size is small; the statistical analysis was limited by this, and the present findings must be interpreted with caution. Recall bias may also exist, since PA and social contact during lockdown were assessed retrospectively. This study did not include a healthy control group exposed to the same pandemic and lockdown conditions. Confinement and lockdown not only reduce PA and social contact, but can also introduce other factors such as an unhealthy diet or poor sleep quality, which should be addressed in future research. Also, the cognitive fluctuation in DLB increased the difficulty of the follow-up work. Related research on patient and caregiver burden is also currently underway.
CONCLUSION
Patients with MCI, AD and DLB had different progression of cognitive decline and NPS during the first year of the COVID-19 pandemic. Patients with DLB showed rapid worsening after the initial 4-month lockdown in China. Reduced PA and social contact during confinement had a long-term impact on cognition and NPS in dementia patients. During quarantine and stay-at-home mandates, caregivers should help patients with cognitive impairment and dementia to maintain home exercise routines of a certain intensity and frequency and to maintain social contact with friends and relatives by phone and internet.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary files, further inquiries can be directed to the corresponding author/s. | 2021-07-28T13:33:56.258Z | 2021-07-28T00:00:00.000 | {
"year": 2021,
"sha1": "578defc13d658dfd5fa07e6f4ecc433c07572922",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2021.711658/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "578defc13d658dfd5fa07e6f4ecc433c07572922",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55650005 | pes2o/s2orc | v3-fos-license | DEM RECONSTRUCTION USING LIGHT FIELD AND BIDIRECTIONAL REFLECTANCE FUNCTION FROM MULTI-VIEW HIGH RESOLUTION SPATIAL IMAGES
This paper presents a method for dense DSM reconstruction from a high-resolution, mono-sensor, passive spatial panchromatic image sequence. The interest of our approach is four-fold. Firstly, we extend the core of light field approaches using an explicit BRDF model from the image synthesis community, which is more realistic than the Lambertian model. The chosen model is the Cook-Torrance BRDF, which enables us to model rough surfaces with specular effects using material-specific parameters. Secondly, we extend light field approaches to non-pinhole sensors and non-rectilinear motion by applying a suitable geometric transformation to the image sequence. Thirdly, we produce a 3D cost volume covering all the tested candidate heights and filter it using simple methods such as volume cost filtering or variational optimization. We have tested our method on a Pleiades image sequence over various locations with dense urban buildings and report encouraging results with respect to classic multi-label methods such as MIC-MAC, or more recent pipelines such as S2P. Last but not least, our method also produces maps of material parameters for the estimated points, allowing us to simplify building classification or road extraction. * Corresponding author
INTRODUCTION
Although an extensive body of literature exists on the subject of dense DSM reconstruction, the reconstruction of a reliable DEM or DSM from visible passive optical sensors is still a challenging task nowadays. In particular, occlusion problems, radiometric variations due to specular objects, shadows and precise localisation are at the core of these challenges. This is especially true for complex scenes such as dense urban areas, since they gather all these difficulties. Looking at the height estimation of points on registered images, one usually casts the problem as a disparity estimation problem in a specific geometry. We thus seek the displacement of each pixel between two registered images. Depending on the chosen formulation, many methods exist to solve the problem. The solutions are often a trade-off between purely local radiometric matches (with estimation errors due to image noise), global priors over the displacement map (varying smoothly, sharp edges, heavy-tailed distribution, etc.) and the step for the discrete estimation. Such problems are sometimes named "multi-label problems", and graph-cut techniques (Boycov et al., 2001) seemed very promising although they only provided an estimation of the sought solution. Improvements were brought in specific cases (Ishikawa, 2003) which could be applied to disparity estimation. In this respect, the work of (Pock et al., 2008; 2010) seemed even more promising, as a global solution was provided, with less memory consumption, in a continuous framework solving a convex problem in a higher dimension. Approximate solutions to non-convex problems are also sought with semi-global matching (Hirschmuller, 2008), which provides an efficient linear-time algorithm and very high visual quality results. Other algorithms try to bring an estimate through filtering, such as volume cost filtering (Rhemann et al., 2011), using in this very special case additional information as a base for the support of the applied filters. On the other hand, when more than two images are used, most of these algorithms have to work on each pair or a few pairs of images and then decide which values are to be trusted. Other works have focused on using all the available information from the multiple views at the same time. The light field method (Kim et al., 2013) casts the disparity estimation problem on all views into a straight-line seeking problem, which is very appealing when many views are available.
From a sensor point of view, a first problem comes from the fact that many satellites do not follow a pinhole or projective geometry. Most of them are push-broom sensors and they do not follow the same geometric rules: epipolar lines become hyperbolas. However, recent studies from the CMLA (de Franchis et al., 2014a; 2014b; 2014c) on Pleiades imagery deal with adapting and correcting the push-broom geometry so as to use off-the-shelf computer vision disparity algorithms. To this end, they process the image on a tile-by-tile basis with location corrections to ensure that the approximation error of the hyperbolas remains within less than one tenth of a pixel. So to some extent the push-broom sensor geometry is not an issue. What seems more troublesome is the variability of the radiometry when the same scene is viewed from different points of view. In fact, the only model usually implemented is the Lambertian one, which assumes that the luminance reflected by a surface only varies as a function of the angle between the normal at the observed point and the lighting direction, not the viewing direction. A simpler way to put it is that no radiometric model is assumed when seeking disparities. Although this simple model is generally found in shape-from-shading algorithms, good results may follow (Courteille et al., 2004).
In this paper we propose to address the radiometry problem using a non-Lambertian bi-directional reflectance function (BRDF). We chose the classic Cook-Torrance parametric model and combined it with an adapted light field approach inspired from (Kim et al., 2013) so as to benefit from all the views. Moreover, we want our output to be in the shape of a cost volume, to allow multi-label problem-solving algorithms (graph cuts, semi-global matching, volume cost filtering) to refine our results.
The remainder of the paper is organized as follows: in Section 2, we first recall the principles of the light field and the chosen illumination function, then we explain how to adapt light field methods to push-broom sensors and how to shape the output so as to use post-filtering methods. We also highlight the differences with respect to the existing light field method. In Section 3, we present our dataset, first results, and some state-of-the-art DSMs on the same scene. A discussion of the obtained results and the method follows in Section 4. Eventually, we conclude on the possible improvements of the method and some of its possible uses.
LIGHT FIELD, ILLUMINATION AND PROPOSED METHOD
We first describe the light field approach and its requirements. Then, we move to the selected illumination model and its implementation within the light field method. Eventually, we elaborate on the required shape of the output so as to be suitable for state-of-the-art post-processing.
2.1 Light field
(Bolles et al., 1987) describes the following requirements for light field imagery: 1) the camera movement is rectilinear at constant speed, 2) each acquisition is separated by the same amount of time (regular sampling), 3) the camera point of view is perpendicular to the movement direction.
Figure 1. Illustration of the light field requirements, epiplanar slice of the epibloc and epibloc (Bolles et al., 1987).
Considering a couple of consecutive images and a given feature, the distance of the feature to the camera path (D) depends on the focal distance of the camera (h), the distance between the acquisitions (ΔX) and the observed displacement of the feature on the images (ΔU), following the classic relation $D = h\,\Delta X / \Delta U$. When these requirements are met, a given feature on an image moves along the horizontal direction U. The apparent velocity of the feature displacement only depends on its distance with respect to the moving direction of the camera.
From an image point of view, stacking the acquired images provides a 3D volume; slicing it along the U and time directions yields a plane on which feature displacements are lines with different slopes, see Figure 1 for an illustration.
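As a minimal illustration of these two principles (the function names and the NumPy array layout below are ours, not from the original light field papers), building an epipolar-plane slice and converting a measured line slope back to a distance can be sketched as:

```python
# Building the epipolar-plane image (EPI): stack the registered frames and
# slice along (U, time) for a fixed V row; each scene point traces a line
# whose slope encodes its distance D = h * dX / dU.
import numpy as np

def epi_slice(frames, v):
    """frames: list of registered images forming a (T, V, U) volume; returns the (T, U) slice at row v."""
    volume = np.stack(frames, axis=0)
    return volume[:, v, :]

def depth_from_slope(h, delta_x, delta_u):
    """Distance to the camera path from focal length h, baseline step dX and per-frame shift dU."""
    return h * delta_x / delta_u
```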
Using these principles, (Kim et al., 2013) designed a bottom-up approach identifying the most probable recognized lines based on simple filtering of the radiometry, with edge estimation used to select the points on which to perform the line recognition at each scale. In fact, one drawback of the epiplanar representation is that features at similar height belonging to the same neighborhood and having similar radiometry may appear as a band of a given thickness, entailing an uncertainty on the recognized slope of the line. This is particularly visible on spatially homogeneous image patches, as illustrated in Figure 2. The classic light field algorithm assumes that an object appears in the image sequence with the same radiometry. This hypothesis is reasonable if the incidence variations stay within a few degrees. However, in the case of a satellite sequence, viewing angles range from -45° to 45°. In that situation, the Lambertian hypothesis is not suitable anymore.
Illumination model
A popular and generic way of describing light reflection on a hard surface is the bi-directional reflectance function (BRDF). The BRDF is defined as the ratio between exiting radiance and incoming irradiance, and accounts for all angular dependencies.
Under the Lambertian hypothesis, the BRDF is the constant $f_r = \rho/\pi$, where ρ is the surface reflectance (or albedo), i.e. the ratio between total reflected and incoming energy. Many BRDF models have been developed that are capable of accounting for non-Lambertian behaviors. For our light field approach, we selected the Cook-Torrance model (Cook and Torrance, 1981), with the Schlick approximation in the Fresnel term (Schlick, 1994). This reflectance function is the sum of two contributions, a pure Lambertian term and a micro-facet term producing a semi-specular reflection. The second term is obtained by modelling the surface as purely specular elementary facets whose geometric orientation follows a given distribution, see Figure 3 for an illustration. In the formulation, l denotes the normed lighting vector, v the normed viewing vector, and k_l, η and σ are respectively the Lambertian coefficient, the refraction index and the roughness. All these values are considered for the observed point P and its geometry (in particular, its unitary normal vector). The micro-facet term is itself a product representing several factors, with n denoting the unitary normal at point P and h the normed half-vector between l and v. Schlick's approximation is used for the Fresnel term, with η as the refraction index.
The Beckmann distribution of the facets and the geometry term use n as the unitary normal vector at point P. Our choice of the Cook-Torrance BRDF model was motivated by its simple parametric nature and the reasonable number of parameters involved. Using the satellite meta-data that come with the images, several parameters are known for each image, in particular the viewing direction and the lighting direction. To sum up, the parameters to estimate for each imaged ground point are: -the surface orientation (normal vector n) -the Lambertian reflectance k_l -the Cook-Torrance parameters σ and η (surface roughness and refraction index). We hereafter denote by ϴ this set of parameters, possibly indexed by the U, V coordinates.
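The exact equations did not survive text extraction; for reference, a standard Cook-Torrance formulation with Schlick's Fresnel approximation and a Beckmann facet distribution is recalled below in our notation. The k_l/(1 − k_l) mixing and the precise normalization constants are our reading and may differ from the paper's.

```latex
% Standard Cook-Torrance formulation (reconstruction; normalizations may differ from the paper's).
\[
f_r(\mathbf{l},\mathbf{v}) \;=\; k_l\,\frac{\rho}{\pi}
  \;+\; (1-k_l)\,\frac{D(\mathbf{h})\,F(\mathbf{v},\mathbf{h})\,G(\mathbf{l},\mathbf{v},\mathbf{h})}
        {\pi\,(\mathbf{n}\!\cdot\!\mathbf{l})\,(\mathbf{n}\!\cdot\!\mathbf{v})},
\qquad \mathbf{h}=\frac{\mathbf{l}+\mathbf{v}}{\lVert\mathbf{l}+\mathbf{v}\rVert}
\]
\[
F(\mathbf{v},\mathbf{h}) = F_0 + (1-F_0)\bigl(1-\mathbf{v}\!\cdot\!\mathbf{h}\bigr)^5,
\qquad F_0=\Bigl(\tfrac{\eta-1}{\eta+1}\Bigr)^{2}
\]
\[
D(\mathbf{h}) = \frac{1}{\pi\sigma^{2}\cos^{4}\alpha}\,
  \exp\!\Bigl(-\tfrac{\tan^{2}\alpha}{\sigma^{2}}\Bigr),
\quad \alpha=\arccos(\mathbf{n}\!\cdot\!\mathbf{h}),
\qquad
G = \min\!\Bigl(1,\;\tfrac{2(\mathbf{n}\cdot\mathbf{h})(\mathbf{n}\cdot\mathbf{v})}{\mathbf{v}\cdot\mathbf{h}},\;
                 \tfrac{2(\mathbf{n}\cdot\mathbf{h})(\mathbf{n}\cdot\mathbf{l})}{\mathbf{v}\cdot\mathbf{h}}\Bigr)
\]
```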
Proposed method
We will now describe our method explaining the extensions that were brought to the light field approach.
Geometric and time frame corrections:
It is clear that in the general case of push-broom imaging the two geometric requirements from Section 2.1 are not met. However, by reprojecting the images onto a plane of constant height, after refining the attitudes through Euclidium (see Magellium, 2013b), we obtain images sharing the very same epipolar lines. Stacking them in sampling order and slicing the stack in the (U, time) plane yields an epipolar plane such as the one shown in Figure 4. We then have to compensate for the acquisition times: since the images were sampled along a non-rectilinear path, the path of a feature point (e.g., the corner of a building) is not a line. On our data, a Pleiades acquisition over Melbourne, we get a specific curve for each feature point, as illustrated in Figure 4.
To correct this effect, we have to apply a time shift to each frame. This correction is computed with simple geometric relations with respect to the satellite path and the time between each acquisition. For practical purposes, we assume that our time frame indices span from 1 to 17. Eventually we get a simple formula giving the location of a point in each frame from the (U, V) coordinates of the point in the nadir frame, its associated slope and per-frame multiplicative correction coefficients, denoted c. With this, we get all the observed radiances on each frame for a given point at a supposed slope. This observation is shaped into a 1D, 17-element real-valued vector, using a possible frame-wise resampling in the U dimension. We hereafter denote such a vector obs(u, v, s). Assuming we have a merit function for such a vector, spanning a given slope interval for a given point allows us to determine, from the merit function point of view, its associated slope, thus enabling us to get its height.
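A minimal sketch of how such an observation vector can be assembled is given below; the per-frame correction coefficients c are assumed to be precomputed, and the linear interpolation in U is an illustrative choice rather than the paper's exact formula.

```python
# Sketch of building the per-slope observation vector obs(u, v, s).
# `frames` is the stack of 17 reprojected images, `c` the per-frame correction
# coefficients mentioned in the text (assumed precomputed), and `t0` the index
# of the nadir frame.
import numpy as np

def observation_vector(frames, c, u, v, slope, t0=8):
    """Radiance observed in every frame for a nadir point (u, v) at a candidate slope."""
    n_frames, _, width = frames.shape
    obs = np.empty(n_frames)
    for t in range(n_frames):
        # Displacement along U grows with the (time-corrected) distance to the nadir frame.
        u_t = u + slope * c[t] * (t - t0)
        u_lo = int(np.clip(np.floor(u_t), 0, width - 2))
        w = u_t - u_lo
        # Linear resampling in the U dimension, as suggested in the text.
        obs[t] = (1.0 - w) * frames[t, v, u_lo] + w * frames[t, v, u_lo + 1]
    return obs
```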
Inverting the illumination function:
The inversion is cast as a simple least-squares problem: for a given point and slope, we minimize over the parameter set Θ the sum of squared differences between the observed vector obs(u, v, s) and the radiances predicted by the BRDF model for the corresponding viewing and lighting geometries. This functional is non-convex, so to better solve it we randomly draw starting points belonging to an admissible set of parameters. For each starting point, a gradient descent scheme with various fixed step sizes, spanning from 10^-9 to 10^-1, is applied. Usually 100 iterations are enough to reach an acceptable approximation. Thanks to this method, we produce a cost volume that associates, with each point seen in the central (nadir) frame and each candidate slope, the residual obtained when fitting the BRDF model to the 17 samples.
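The following NumPy sketch illustrates the multi-start descent on one point and one slope; the surrogate predict function stands in for the full Cook-Torrance radiance prediction, and the admissible parameter bounds and step sizes are assumptions rather than the paper's values.

```python
# Minimal sketch of the multi-start least-squares BRDF inversion (NumPy only).
# `predict` is a toy Lambertian + specular-lobe surrogate; the paper fits the
# full Cook-Torrance model instead.
import numpy as np

def predict(theta, cos_nl, cos_nv):
    k_l, spec, rough = theta
    return k_l * cos_nl + spec * np.maximum(cos_nl * cos_nv, 0.0) ** (1.0 / max(rough, 1e-3))

def residual(theta, obs, cos_nl, cos_nv):
    return np.sum((obs - predict(theta, cos_nl, cos_nv)) ** 2)

def fit_point(obs, cos_nl, cos_nv, n_starts=10, n_iter=100, seed=0):
    """Return the best residual over random restarts (numerical-gradient descent)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.0, 0.0, 0.05]), np.array([1.0, 1.0, 1.0])  # admissible set (assumed)
    best = np.inf
    for _ in range(n_starts):
        theta = rng.uniform(lo, hi)
        for step in (1e-3, 1e-2, 1e-1):          # a few fixed step sizes
            for _ in range(n_iter):
                grad = np.zeros_like(theta)
                for i in range(theta.size):      # forward-difference gradient
                    d = np.zeros_like(theta)
                    d[i] = 1e-4
                    grad[i] = (residual(theta + d, obs, cos_nl, cos_nv)
                               - residual(theta, obs, cos_nl, cos_nv)) / 1e-4
                theta = np.clip(theta - step * grad, lo, hi)
        best = min(best, residual(theta, obs, cos_nl, cos_nv))
    return best
```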
Note that, in general, for sequences with a high base-over-height ratio, multiple occlusion problems arise since only the highest points should be visible in all views. Iterative solutions exist (Kim et al., 2013) but with many more samples and a smaller base-over-height ratio. Due to time-consuming computations, we could not afford an iterative strategy where the visibility mask of each imaged point could be estimated. Consequently, we kept it to a minimum of three sets of profiles: 1. the full view for the highest points (17 samples), 2. the left view (9 samples), 3. the right view (9 samples). When all these profiles are computed, we pick the slope corresponding to the best score among the three available profiles for each element at position (u, v, s). An illustration of two profiles associated with a given epiplanar slice is provided in Figure 5.
Filtering/selection on the cost volume:
As mentioned earlier, we output a merit function for each tested slope at each point of the nadir image. We thus collect a volume whose total number of elements equals the number of pixels along the U direction times the number of pixels along the V direction times the number of slopes that were tested. It is usually better to try slopes separated by a fixed amount. This amount shall be such that it is greater than one pixel on the extreme views.
Once the whole cost volume is computed, we aim at assigning a slope index to each point (u, v), thus constructing an optimal surface. A naive strategy is to take the minimum along the slope direction for each point of the nadir image. This solution brings very noisy slope maps, since a very precise visibility of each point has not been estimated and the associated residues lead to errors in the slope estimation. This problem is often encountered in the computer vision literature. We have tested several state-of-the-art algorithms to be used with our cost volume, namely: 1. semi-global matching (SGM), 2. volume cost filtering, 3. a variational approach.
Let us describe very briefly these three methods:
Semi-global Matching (SGM)
This method is used for disparity estimation in stereoscopic imagery. Starting from the (radiometric) disparity costs for each displacement along each epipolar line, it propagates the summed cost of a best path for each disparity. This is done by choosing the minimum over several candidates when evaluating the current disparity while propagating along an epipolar direction: 1) an adjacent neighbor plus a fixed penalty (P1), 2) the previous cost along the line (same disparity), 3) the minimum cost of the previous point plus a second constant penalty (P2). The method then iterates over non-epipolar directions, enabling a spatial regularization of the disparities with a non-convex regularizer. In our case this method relies on a simple propagation of costs along a given direction in the (U, V) plane. Along the (1, 0) direction, the aggregated cost follows the classic SGM recursion $L_r(u, v, s) = C(u, v, s) + \min\big(L_r(u-1, v, s),\; L_r(u-1, v, s\pm1) + P_1,\; \min_{s'} L_r(u-1, v, s') + P_2\big) - \min_{s'} L_r(u-1, v, s')$. The formula to apply in the other directions is found in (Hirschmuller, 2008). In our case, we only applied the first eight directions. Once the new cost volume is computed, we take the minimum along the slope direction for each (u, v) point.
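A compact sketch of this aggregation along the (1, 0) direction of a (V, U, S) cost volume could look as follows; the penalty values and array layout are illustrative choices.

```python
# Sketch of SGM cost aggregation along the (1, 0) direction for a cost volume
# of shape (V, U, S); P1 and P2 are the small/large transition penalties.
import numpy as np

def aggregate_left_to_right(cost, p1=0.5, p2=2.0):
    V, U, S = cost.shape
    agg = np.empty_like(cost)
    agg[:, 0, :] = cost[:, 0, :]
    for u in range(1, U):
        prev = agg[:, u - 1, :]                      # (V, S) costs of the previous column
        prev_min = prev.min(axis=1, keepdims=True)   # best previous cost per row
        same = prev
        up = np.pad(prev[:, 1:], ((0, 0), (0, 1)), constant_values=np.inf) + p1
        down = np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf) + p1
        jump = prev_min + p2
        best = np.minimum(np.minimum(same, jump), np.minimum(up, down))
        agg[:, u, :] = cost[:, u, :] + best - prev_min
    return agg

# After aggregating over the eight directions and summing, the slope is picked
# as the argmin over the S axis for every (u, v).
```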
Volume Cost Filtering
This method relies on local filtering of the costs in the spatial dimensions. For each point, a weighted sum over the spatial neighborhood is performed. The weights depend on an image I, hence the name "image guided filtering".
These weights take into account the local mean and standard deviation of the image, plus a given smoothing parameter denoted ε, see (Rhemann et al., 2011) for a complete description.
Once the new cost volume is computed, we simply take the minimum along the slope direction for each (u, v) point. Although interesting, this method heavily relies on a relevant guide image, which, in our context, could not be found.
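For completeness, a slice-wise guided-filter pass over the cost volume (the standard He et al. guided filter used by Rhemann et al., 2011) can be sketched as below; the radius and ε values are illustrative, and, as noted above, a suitable guide image was not available for our scene.

```python
# Sketch of guided filtering applied slice-by-slice to the cost volume.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Filter p guided by image I (both 2-D float arrays, e.g. scaled to [0, 1])."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def filter_cost_volume(cost, guide, radius=8, eps=1e-3):
    out = np.empty_like(cost)
    for s in range(cost.shape[2]):
        out[:, :, s] = guided_filter(guide, cost[:, :, s], radius, eps)
    return out
```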
Variational Approach
In this method, we aim at finding the slope S for each point (u, v). As input, we use our cost volume, here denoted ρ, and a global prior over the field. The Total Variation prior was selected to allow sharp discontinuities in the final slope field; the resulting problem is $\min_S \int_\Omega \lvert\nabla S(u,v)\rvert\, du\, dv + \lambda \int_\Omega \rho\bigl(u, v, S(u,v)\bigr)\, du\, dv$ (15). Note that this problem is non-convex and is not solved directly: we first reformulate it with a higher-dimensional variable in a different space, turning it into a "min-max" convex optimization problem as explained in (Pock et al., 2008) and (Pock et al., 2010). We directly obtain the best slope estimate once a stop condition is met (usually a given number of iterations).
Global pipeline:
We here sum up the various processing steps used in this chain.
Figure 6. Global processing chain.
Data set
The data set consists of 17 images taken by Pleiades with almost symmetric incidence angles ranging from -50° to 50°. Thanks to the satellite agility, the overall time required to acquire these images was only about 8 minutes, each image taking roughly 3 seconds to be acquired; see (Kubik et al., 2012) for a better overview of the Pleiades satellite.
Figure 7. Illustration of the Melbourne Pléiades sequence: first, nadir and last images. All of these were reprojected onto a plane of constant height; the attitudes were corrected by refining with Euclidium.
A preprocessing step was first applied to the images to convert them to reflectance instead of radiance digital counts. The calibration coefficient, solar irradiance and sun direction required for this computation were found in the meta-data. A global trivial surface normal was used in the process, as shown in Equation 16, where X_count stands for the digital count, G for the calibration coefficient of the panchromatic band and L_in for the sun irradiance.
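Since Equation 16 was lost in extraction, the sketch below shows the usual conversion from panchromatic digital counts to top-of-atmosphere reflectance under the stated assumptions (flat global surface normal, no Earth-Sun distance correction); it may differ from the paper's exact formula.

```python
# Sketch of the radiometric preprocessing: panchromatic digital counts to
# top-of-atmosphere reflectance, assuming X / G gives radiance.
import numpy as np

def counts_to_reflectance(x_counts, gain, sun_irradiance, sun_zenith_deg):
    radiance = x_counts / gain                          # L = X / G
    mu_s = np.cos(np.deg2rad(sun_zenith_deg))           # cosine of the solar zenith angle
    return np.pi * radiance / (sun_irradiance * mu_s)   # rho = pi * L / (E_sun * cos(theta_s))
```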
Results
We first illustrate the observed scene with the nadir view, see Figure 8, which consists of a set of high buildings and trees. We recall that the whole image sequence was used in the height estimation. However, we focused on a patch of size 1000 × 200 due to very high computation times.
More than 200 slopes were tested, with the upper and lower values falling in the range of the slopes visible in the epiplanar slices. Note that the computation of the radiometric residues can easily be processed in parallel, since the computations on each epiplanar slice are independent. Some of the filtering methods could not be parallelized and required sequential processing.
The overall computation time was huge, roughly 4 days on a standard desktop PC, with an uneven repartition: 80% of the time was spent on the residue computation (for the three profiles) and the remaining time on the filtering process. Unfortunately, we could not find a suitable guide image for the volume cost filtering on this specific scene, and all our attempts resulted in over-smoothing of the heights. The results appear to be noisy or inaccurate on homogeneous areas and in shadow areas; the regularizations help to prevent large discrepancies.
Comparison
The comparison here cannot be done against ground truth since no lidar acquisition was available over the studied area. We thus only compare visually with some existing solutions: first with the S2P CNES solution on the same region using a Pléiades triplet, then with another solution from our company (see Magellium, 2013a for more details), still with a Pleiades triplet. Obviously the S2P pipeline gives better results; however, we note that the edges of the buildings seem quite sharp with our method, especially when the SGM filtering is used.
DISCUSSION
From the previous results and experiments, it is clear that the main drawback of our method is its computation time. In fact, the optimization process to estimate the BRDF parameters from the observed samples should be done in a more efficient way. For example, a first guess could be made for the parameters based on the observed samples (from learning or classification methods) and then a gradient descent algorithm could refine the estimated parameters. Another huge optimization would be to have a guess for the slope values, or at least to narrow the range of sought slopes. Such a range could be derived from a low-resolution reference DEM, such as the NASA SRTM database. Another choice would be to maximize the alignment of the structure tensor in the epiplanar slices. Another important drawback of our algorithm is its weakness over homogeneous areas, since the variation of their reflectance across the image sequence is very small, which entails ambiguous slope choices. Although the regularization helps in this matter, as all the possible choices have more or less the same residue value in the cost volume, it is still only a global prior. In shadow zones, our BRDF model itself is not relevant since simplifications (e.g., a punctual light source at infinity) were made; maybe simpler models in these areas would perform better.
On the other hand, the parameters obtained from the inversion could be reused in segmentation/classification algorithms. Although we have not investigated this matter, the obtained maps seem promising, especially on specular areas due to metallic parts; see Figure 13 for an example.
CONCLUSION
To conclude, in this study we have provided a way to extend light field methods to non-pinhole sensors with non-rectilinear motion using a BRDF model. Although our test case consists of only 17 images, which are very hard conditions for a light field approach, the nominal case for DSM production is in general restricted to 3 or even 2 images of the same scene. More is to be done to speed up the computations, or at least to reduce the required number of tested slopes. On the other hand, the BRDF parameter extraction could lead to relevant classification/segmentation of the observed materials. Future work will focus on reducing the number of required views and on a more in-depth comparison and evaluation of the produced results.
Figure 2. An image zone (U, V) with a homogeneous roof and the epiplanar slice (U, time) associated with the yellow line. The possible uncertainty of the slopes clearly depends on the homogeneity of the roof. Since the sensor motion is neither rectilinear nor at constant velocity, lines appear in a wedge-shaped form.
Figure 3. Illustration of the Cook-Torrance surface reflectance for increasing roughness.
Figure 4. Slice in the epi-bloc along the (U, time) direction for a given point along the V direction.
Figure 5. Top: epiplanar slice in the (u, t) direction. Middle: associated full-profile cost in the (u, s) direction. Bottom: associated right-profile cost in the (u, s) direction. For costs, dark values show small residues and bright values show high residues.
Figure 8. Nadir view of our test scene; the image dynamic was slightly changed to make details more visible.
Figure 9. Our method with the variational optimization.
Figure 10. Our method with the SGM filtering.
Figure 13. (Top) Lambertian map obtained from the BRDF inversion. (Bottom) Roughness map obtained from the BRDF inversion. Most of the blue zones underline observed specular behaviour. | 2018-12-11T16:51:57.827Z | 2016-06-09T00:00:00.000 | {
"year": 2016,
"sha1": "6df6125855e457e6162550ec0c8a1df9e848bcdb",
"oa_license": "CCBY",
"oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B3/503/2016/isprs-archives-XLI-B3-503-2016.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6df6125855e457e6162550ec0c8a1df9e848bcdb",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Geography"
]
} |
218858321 | pes2o/s2orc | v3-fos-license | Tsinghua
User-generated social media messages usually contain considerable multimodal content. Such messages are usually short and lack explicit sentiment words. However, we can understand the sentiment associated with such messages by analyzing the context, which is essential for improving sentiment analysis performance. Unfortunately, the majority of existing studies consider the impact of contextual information based on a single data model. In this study, we propose a novel model for performing context-aware user sentiment analysis. This model involves the semantic correlation of different modalities and the effects of tweet context information. Based on our experimental results obtained using the Twitter dataset, our approach is observed to outperform other existing methods in analyzing user sentiment.
Introduction
Microblogging social networks have become one of the most useful ways for people to express personal opinions and sentiments. Sentiment analysis aims to automatically analyze user-generated data to discover the sentiments of various users toward products, services, and events [1]. Sentiment analysis is essential for analyzing individual behavior and can be used in several applications, such as forecasting political election results [2], mental health care [3], review analysis [4], and product analysis [5]. Unlike traditional media, such as newspapers, online social media contains a large amount of multimodal data that can provide considerably more clues for estimating sentiments than words alone. With the increasing prevalence of smartphones, a growing number of users are inclined to post multimodal messages to express themselves on social networks. On the Sina Weibo platform, 95% of the image tweets are accompanied by text [6], whereas 99% of the image tweets are accompanied by textual content on Twitter [7]. Thus, different modalities in a tweet can be combined for performing sentiment analysis.
Further, the tweets posted on social media usually present abundant contextual information, such as the timelines of the users and the comments of other people. This contextual information is helpful for conducting sentiment analysis because it can comprehensively characterize the contextual attributes of tweet streams. For example, Fig. 1a shows two sequential tweets posted by the same user within an hour. Both tweets reflect the user's sadness at the death of Carrie Fisher, indicating that tweets posted within a short period of time are often sentimentally related. A similar situation can be observed between tweets and comments. As depicted in Fig. 1b, the tweet shows a smiling girl with the sentence, "Smile :) it costs nothing!", reflecting a positive sentiment. This sentiment can be further confirmed by the comments posted by other users on this tweet. A major challenge associated with conducting social media user sentiment analysis is how to model the semantic correlation of different modalities based on the impact of contextual information.
In this study, we use these two types of contextual information along with multimodal data to analyze the semantic correlations that exist among different modalities. Further, we formulate the sentiment factors as latent variables constrained by the sentiment and topic distributions. We subsequently use the probabilistic graphical model technology to characterize the relations that exist between multimodal data and their contextual information. We also propose a sampling algorithm to obtain solutions for our model. The experimental results denote that our model outperforms other methods with respect to sentiment analysis in a multimodal scenario.
The remainder of this paper is as follows. Section 2 provides an overview of the studies conducted in relation to sentiment analysis. Section 3 formulates our sentiment analysis problem and introduces the construction of our Context-Aware Sentiment Analysis (CASA) model. Section 4 presents the sampling algorithms for CASA model inference, and Section 5 illustrates the experimental setup. Extensive experimental results are reported in Section 6. Finally, Section 7 presents the conclusions of our study.
Related Work
In this section, we introduce studies related to the visual feature representation of images. Further, we present multimodal sentiment analysis in Section 2.2. Finally, we elaborate on the status of our context-based sentiment analysis research.
Visual feature representation of images
Images contain several clues for conducting sentiment analysis. One of the most important challenges associated with image sentiment analysis is how to obtain suitable visual features that can reflect the emotions of users. Majority of the existing work conducted in this field is based on low-level visual features [8][9][10][11] , such as color, texture, and shape. Unfortunately, a considerable affective semantic gap exists between the low-level visual features and the sentiments conveyed by the images. To alleviate this problem, several studies have begun to focus on mid-and high-level visual features. The Bag-of-Visual-Words (BoVW) [12] method maps the key points of images to the visual word vectors that can reveal the characteristics of the images. An alternative model, the principles-of-art-based emotion feature [13] model, unifies various features derived based on different principles such as symmetry, harmony, and gradation. Another model known as Sentribute [14] extracts the lowlevel features of images and uses a training classifier to generate 102 mid-level attributes to represent these images. In contrast to the aforementioned methods, the Adjective Noun Pair (ANP) [15] method constructs a large-scale visual sentiment ontology to detect the presence of ANPs in an image instead of representing the different features of images.
Multimodal sentiment analysis
In case of multimodal sentiment analysis, majority of the existing studies have focused on the fusion of multimodal data that includes the following two main methods: early fusion and late fusion. The early fusion method concatenates the textual and visual features into a single feature vector as the input for the sentiment analysis model. The late fusion method first analyzes the textual and visual data, and then combines the output results of different models. Researchers used the early fusion method to generate a joint representation based on different modalities and transmitted it to the downstream classifiers. For example, Wang et al. [16] modeled texts and images in a unified bag-of-words representation and used logistic regression to analyze the sentiments associated with microblogs. Katsurai and Satoh [17] used canonical correlation analysis to project the features of different modalities into a latent embedding space for obtaining a strong correlation among these modalities. You et al. [18] developed a cross-modality regression algorithm to ensure agreement among the sentiment labels predicted using different modality features. Baecchi et al. [19] associated continuous bag-of-words for text feature extraction with a denoising autoencoder for performing image feature extraction and used neural networks to fuse multimodal features for conducting sentiment analysis. Xu and Mao [20] proposed a deep semantic network to combine the features of images and text in a tweet. Although the early fusion methods could capture the correlation among different modalities, the fusion features lacked explicit interpretability, which made the late fusion method a good alternative. The late fusion methods combine the prediction results obtained using several different modalities. For example, Niu et al. [21] proposed a baseline for conducting multimodal sentiment analysis using the late fusion method to combine the analytical results of the textual and visual features. Besides, Cao et al. [22] employed a similar late fusion process to combine the textual and visual sentiment results for conducting sentiment analysis.
The aforementioned methods mainly investigated the fusion approaches for multimodal data, but rarely considered the impact of the contextual information. However, the tweets posted on social media are not isolated and contain abundant contextual information. The contextual information implies the environmental attributes and provides supplemental information for identifying the sentiment associated with a tweet. Further, the short length of the tweets and implicit sentiment words lead to the presence of considerable challenges in understanding the tweet sentiment, which increases the importance of the contextual information for conducting tweet sentiment analysis.
Context-based sentiment analysis
The contextual information available from tweets characterizes their environmental properties, such as the locations, from which the tweets were tweeted, hashtags, and comments. The contextual information modeling methods comprise matrix factorization and graph models. The matrix-factorization-based methods are superior for mining the latent factors in data. For example, Hu et al. [23] decomposed the message content matrix into user-text and user-user matrices to identify potential relations among different text tweets. They also used the latent relation to conduct sentiment analysis. In another study, Hu et al. [24] used the matrix factorization method to extract emotional clues from the post-word matrix and utilized that information to infer the sentiment labels associated with the posts. Furthermore, Wang et al. [25] proposed a non-negative matrix trifactorization framework to incorporate multiple modalities for identifying the sentiment conveyed by images.
Unfortunately, the computation overhead associated with the matrix factorization methods is often huge. The sparseness of data in social media is also problematic for the application of these methods. In contrast, the probabilistic graphical models can explicitly represent the correlations among different factors and offer an acceptable level of computation time. For example, Yang et al. [8] proposed an emotion learning method by jointly modeling images and comments. Wang et al. [10] considered the impact of social influence and temporal correlation factors on the prediction of the emotional status of users in image-heavy social networks. Yang et al. [11] developed a probabilistic framework to predict the emotional status of various users based on their emotional status histories and social structures in image-based social networks. Vanzo et al. [26] modeled a sequence of tweets related to the same conversation or topic and used SVM HMM to predict tweet sentiments. Zhao et al. [27] proposed a method to predict the continuous probability distribution of image emotions in the valence arousal space.
Based on the aforementioned discussion, both the multimodal data and contextual information, especially obtained from the users' timelines and comments on their tweets, should be considered for achieving improved sentiment analysis performance. Therefore, we propose a novel CASA model, which will be elaborated in the subsequent section.
Sentiment Analysis Model
As mentioned in the previous section, multimodal sentiment analysis rarely considers the contextual information while determining the tweet sentiment. In this section, we propose an unsupervised learning method named CASA to analyze the tweet sentiment using multimodal data based on the contextual information. Our method uses untagged data to mine latent factors from the massive dataset, reducing the cost of manual data tagging. Further, we define the problem in Section 3.1. In Section 3.3, our model is introduced after we presented a set of reasonable hypotheses in Section 3.2.
Problem statement
Given a user set U = {u_1, u_2, ..., u_{|U|}}, a tweet set, a textual vocabulary W = {w_1, w_2, ..., w_{|W|}}, and a visual vocabulary V = {v_1, v_2, ..., v_{|V|}}, each tweet is denoted as d = {u, ...}. Let R_d denote the comment set attached to tweet d; each comment r ∈ R_d is denoted as a word sequence of length L_r, and each element (word) can be represented by a word vector x_r generated from the textual vocabulary W.
Based on the formulation of tweets and comments, the sentiment space and contextual information can be defined as follows: Definition 1 Sentiment space: A set containing all the possible sentiment values is considered to be the sentiment space. In this study, we define it as {positive, neutral, negative}.
Herein, we use the impact of the contextual information associated with a tweet to derive the tweet sentiment; specifically, two types of contextual information are involved: (1) users' timelines: the impact of users' historical sentiment states on newly posted tweets; and (2) comments on tweets: the sentiment correlations that exist between comments and tweets. In formal terms, we can define the tweet contextual information as follows: Definition 2 Tweet contextual information: For a given user u, we sort all the tweets according to their posting time; for the i-th tweet d_{u,i}, we consider the previous tweet d_{u,i-1} and the comment set attached to d_{u,i} as contextual information.
Given the aforementioned formulations and definitions, the sentiment analysis problem can be defined as follows. Based on the semantic correlation of different modalities and the impact of contextual information for a given tweet d, we aim to determine the sentiment distribution of d over the sentiment space {positive, neutral, negative}.
Observations and assumptions
The major task of the probability topic model is to discover the correlations among different variables. CASA fuses the latent sentiment factor with the contextual information to construct the generation process of posting tweets and comments. Five hypotheses related to the model are proposed based on the observations of the behavior of posting tweets and comments on social media.
(1) Sentiment labels are associated with topics. The words in different topics may reflect various sentiments [28] . For example, "unpredictable" is negative in "unpredictable steering", but positive in "unpredictable plot". Similarly, depending on the situation, cool-colored images can also express positive ("peaceful") or negative sentiments ("lost and blue"). Thus, we simultaneously model sentiments and topics in this study.
(2) One tweet contains one topic. In social media, one tweet usually contains a single topic. This is caused by the limited character count associated with tweets; thus, tweeting about diverse topics in one short tweet is unrealistic. For instance, tweets have been restricted to 140 characters since the establishment of Twitter, but this character count has been increased to 280 characters from September 2017. Currently, users may use a maximum of four images. Accordingly, majority of the tweets are only related to one obvious topic.
(3) Different modalities exhibit sentiment semantic correlations in the same tweet. In general, the different modalities in a tweet correspond with each other, and tweets are consistent in terms of sentiment expression. Therefore, the text and images in a tweet are assumed to be related to the same topic and exhibit the same sentiment distribution.
(4) Comments can reveal the sentiment of target tweets. The reviewers who post comments under tweets are generally influenced by the sentiment associated with these tweets; the comments are correlated with the corresponding tweets from the sentiment perspective. However, different comments reflect the tweet sentiment to varying extents.
(5) Users' historical tweets in the recent past influence their current sentiment status. Users' sentiments are normally stable over the short term and are highly dependent on their sentiments in the recent past because of the influence of temporal neighborhood information. We assume that the tweets posted in the recent past by a user are sentiment related to construct a correlation among tweets and to ensure model simplicity.
Based on the aforementioned hypotheses, we propose a CASA topic model to describe the generation process of tweets and the corresponding comments. The model exhibits three important characteristics. (1) The sentiment semantic correlation among different modalities is constrained by the overall sentiment distribution and topic in the same tweet. (2) Based on the influence of the temporal neighborhood contextual information, the tweets posted by the same user in the recent past are sentimentally related. (3) The effect of the comment contextual information is considered by introducing a Bernoulli parameter for each comment to bridge the comment to the original tweet.
Model construction
According to the five hypotheses proposed in the previous subsection, a Bayesian graphical model for sentiment analysis called CASA is conceived by combining the tweets and their contextual information. Figure 2 illustrates the structure of our model, and the notations of the model parameters are presented in Table 1. For the tweet generation process, we use the latent Dirichlet allocation method to develop connections between sentiments and tweets. For the contextual information, we consider both the comments and the user timelines. Further, for tweets containing only text or images, we only need to sample the corresponding content. The tweet generation process can be expressed as the joint probability of the topic z_d, the tweet sentiment distribution, the textual words w_d with their sentiment labels, and the visual words v_d with their sentiment labels, where the parameter sets Φ and Η collect the textual and visual word distributions over all topic-sentiment pairs. Comment generative process: During the comment generative process, the sentiment label e is sampled either from the comment's own sentiment distribution, drawn from Dir(δ), or from the overall tweet sentiment distribution, drawn from Dir(α), depending on the situation. For each comment r, a parameter drawn from a Beta prior represents how likely it is that the sentiment of comment r is influenced by the sentiment of its corresponding tweet d. Then, a latent variable c is sampled from a Binomial distribution with this parameter, indicating whether the word is influenced by the sentiment of the corresponding tweet d. If c = 1, the sentiment label e is sampled according to the tweet sentiment distribution; otherwise, we sample e according to the comment's own sentiment distribution. Finally, the word x is generated conditioned on e. The comment generation process can be expressed as the joint probability of the tweet sentiment distribution, the topic z_d, the comment-tweet influence parameters, the comment sentiment distributions, the words in the comment x_r, the words' sentiment correlation variables c_r, and the words' sentiment labels e_r.
Correlation of the adjacent tweets: In Fig. 2 (green block), the correlation between adjacent tweets is denoted using the red dashed line that connects $\vec{\pi}_d$ and $\vec{\pi}_{d-1}$. Based on this, the sequence $\{\vec{\pi}_1, \vec{\pi}_2, \ldots, \vec{\pi}_{|D_u|}\}$ forms a Markov Random Field (MRF), illustrated in Fig. 3. For any pair $(\vec{\pi}_i, \vec{\pi}_{i+1})$ in Fig. 3, we define a potential function in terms of $h(t_i, t_{i+1})$ and $l_u$, where $h(t_i, t_{i+1}) = e^{-\omega (t_{i+1} - t_i)}$ is the exponential decay function that describes the time influence, $\omega$ is the decay constant, and $l_u$ is the user-specific weight parameter that describes the degree of the user's sentiment fluctuation.
Further, we place an exponential prior on $l_u$ whose rate parameter is itself drawn from a Gamma prior with parameters $a$ and $b$. Finally, the joint probability of the model can be deduced, where $A$ is the set of hyper-parameters (including $\vec{\varepsilon}$, $\vec{\alpha}$, $\vec{\beta}$, $\vec{\gamma}$, $\vec{\delta}$, $\omega$, $a$, and $b$), $\Theta = \{\vec{\theta}_1, \ldots, \vec{\theta}_{|U|}\}$, $\Pi = \{\vec{\pi}_1, \ldots, \vec{\pi}_d, \ldots, \vec{\pi}_{|D|}\}$, $T = \{\lambda_1, \ldots, \lambda_{|R|}\}$, and $P = \{\vec{\psi}_1, \ldots, \vec{\psi}_r, \ldots, \vec{\psi}_{|R|}\}$.
Inference
We infer the sampling formulas of the latent variables based on the conjugacy between the binomial and beta distributions as well as the conjugacy between the multinomial and Dirichlet distributions. After obtaining the sampling formulas, we use the Metropolis-within-Gibbs sampling algorithm [29] to explicitly sample the latent variables; here, $\vec{\pi}_d$ is sampled using the Metropolis-Hastings algorithm [30] under the Gibbs sampling framework [31]. The other unknown parameters, including $\vec{\theta}$, $\vec{\varphi}$, $\vec{\eta}$, $\vec{\lambda}$, and $\vec{\psi}$, can be obtained from the sampling results.
The sampling rules for the variables are given as follows. (1) In the rule for the tweet-level assignments, $n^{k}_{u}$ is the number of tweets related to topic $k$ posted by user $u$; $n^{(w)}_{ks}$ is the number of times that the textual term $w$ is assigned to topic $k$ and sentiment $s$; $n^{(v)}_{ks}$ is the number of times that the visual term $v$ is assigned to topic $k$ and sentiment $s$; and $\neg d$ denotes a quantity excluding the current instance. (2) $s^{w}_{i}$ is the sentiment variable of the textual word $w_i$ in tweet $d$.
In the corresponding rules for the comment variables, $n^{(0)}_{r}$ and $n^{(1)}_{r}$ represent the number of times that the latent variable $c$ is sampled to the values 0 and 1, respectively, and $n^{(l)}_{r0}$ is the number of times that $e$ is assigned to sentiment $l$ in comment $r$ when the corresponding $c$ is equal to 0.
(5) l u is the user-specific weight parameter.
(7) $\vec{\pi}_d$ is the sentiment distribution parameter for tweet $d$. We resort to a Metropolis-Hastings step to sample $\vec{\pi}_d$. Given all the current assignments, the proposal distribution can be defined as in Eq. (14), and the acceptance ratio can be derived as in Eq. (15).
Combining Eqs. (14) and (15), we can obtain the sampling rule of $\vec{\pi}_d$ at step $t+1$: (1) generate a candidate $\vec{\pi}^{*}_d$ according to Eq. (14); (2) calculate the acceptance ratio $\alpha(\vec{\pi}^{(t)}_d, \vec{\pi}^{*}_d)$ using Eq. (15); (3) sample a random number $u \sim \mathrm{Uniform}(0, 1)$ and accept the candidate as $\vec{\pi}^{(t+1)}_d$ if $u$ does not exceed the acceptance ratio; otherwise, retain $\vec{\pi}^{(t)}_d$. The sampling process includes two stages. In the first stage, burn-in sampling is performed for the first $M$ steps of the total $I$ iterations. In the second stage, $\vec{\pi}_d$ is estimated using the mean value of the results obtained from the remaining $(I-M)$ steps. After sampling $\vec{l}$, $\vec{z}$, $\vec{s}^{w}$, $\vec{s}^{v}$, $\vec{c}$, $\vec{e}$, and $\vec{\pi}$, we can use the sampling results as the posterior distribution, together with the prior distribution determined by the hyperparameters, to calculate the likelihood parameters $\Theta$, $\Phi$, $H$, $T$, and $P$. The updating rules for $\Theta$, $\Phi$, $H$, $T$, and $P$ are given as follows: (1) $\vec{\theta}_u$ is the topic distribution specific to user $u$.
(2) $\vec{\varphi}_{ks}$ denotes the parameter of the multinomial distribution over textual terms given the topic index $k$ and sentiment index $s$.
(3) $\vec{\eta}_{ks}$ denotes the parameter of the multinomial distribution over visual terms given the topic index $k$ and sentiment index $s$.
(4) $\lambda_r$ denotes the parameter of the Bernoulli distribution of the latent variable $c$ specific to the comment $r$.
(5) $\vec{\psi}_r$ denotes the parameter of the sentiment distribution specific to the comment $r$.
The details of our sampling algorithm based on the Metropolis-within-Gibbs sampling method for parameter estimation in CASA are presented in Algorithm 1.
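Algorithm 1 itself is not reproduced here, but the following sketch illustrates the Metropolis-Hastings step inside the Gibbs sweep for a single tweet's sentiment distribution and the two-stage estimation described above. The functions propose_pi and log_posterior are hypothetical placeholders for Eq. (14) and the model's unnormalized posterior, and the acceptance ratio is written for a symmetric proposal; an asymmetric proposal would need the usual proposal-density correction.

```python
import numpy as np

rng = np.random.default_rng(1)

def mh_update_pi(pi_current, propose_pi, log_posterior):
    """One Metropolis-Hastings step for the tweet sentiment distribution pi_d."""
    pi_candidate = propose_pi(pi_current)  # stands in for the proposal of Eq. (14)
    # log acceptance ratio, assuming a symmetric proposal for brevity
    log_alpha = log_posterior(pi_candidate) - log_posterior(pi_current)
    if np.log(rng.uniform(0.0, 1.0)) < min(0.0, log_alpha):
        return pi_candidate   # accept
    return pi_current         # reject, keep the old value

def estimate_pi(pi_init, propose_pi, log_posterior, iters=1000, burn_in=100):
    """Run the chain and average the post-burn-in samples, as in the two-stage scheme."""
    pi = pi_init
    kept = []
    for t in range(iters):
        pi = mh_update_pi(pi, propose_pi, log_posterior)
        if t >= burn_in:
            kept.append(pi)
    return np.mean(kept, axis=0)
```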
Dataset collection and preprocessing
In this section, we considered Twitter to be our data source for evaluating our model. The dataset used in this study contained all the tweets posted by users in the form of text or image content and all the comments on these tweets. First, we collected the original English tweets posted in May 2014 together with the authors' profiles. The users who posted fewer than 15 tweets were excluded from our final dataset, and the corresponding tweets of these users were also eliminated.
Given the irregularity of tweets in the original dataset, data preprocessing was conducted for both textual and visual contents. On the basis of the unsupervised Bayesian model proposed in this study, these unlabeled data can be directly used for model training after data preprocessing. Table 2 summarizes the basic statistical information obtained in the final dataset after text and image preprocessing. The detailed operations can be presented as follows.
Text preprocessing
For text preprocessing, we initially passed the text through an NLTK TweetTokenizer (a well-known natural language toolkit, https://www.nltk.org) to obtain a token list. In social media settings, a word containing the same letter more than three times consecutively is highly likely to be an irregular word [32]. For example, some users may use "laaaaaaugh" instead of "laugh" to express their emotions. Therefore, we reduced the repetition length to three if a word contained more than three repeated consecutive letters. Special tokens, such as punctuation marks, URLs, and hashtags, were filtered, but emojis and emoticons were retained because of their contributions to sentiment analysis [33]. Subsequently, part-of-speech tagging and named entity recognition were conducted. Spell checking was also conducted using PyEnchant (a spellchecking library for Python, https://github.com/rfk/pyenchant), and stemming was additionally performed. Subsequently, we applied a frequency filter to omit words that occurred fewer than five times, dropped the stop words, transformed all the words into lower case, and discarded short text-only tweets and comments that contained fewer than four words. Finally, we obtained 48,740 unique textual words, including 48,099 textual words in lower case and 641 emojis or emoticons.
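A simplified sketch of this preprocessing pipeline is given below, assuming the thresholds stated above (repetition length capped at three, frequency filter of five, minimum of four words); POS tagging, named entity recognition, spell checking, stemming, stop-word removal, and emoji handling are omitted for brevity.

```python
import re
from collections import Counter
from nltk.tokenize import TweetTokenizer

tokenizer = TweetTokenizer()

def clean_tweet(text):
    tokens = tokenizer.tokenize(text)
    cleaned = []
    for tok in tokens:
        if tok.startswith(("http", "#")):            # drop URLs and hashtags
            continue
        tok = re.sub(r"(.)\1{3,}", r"\1\1\1", tok)   # "laaaaaaugh" -> repetition capped at 3
        cleaned.append(tok.lower())
    return cleaned

def build_corpus(tweets, min_count=5, min_len=4):
    docs = [clean_tweet(t) for t in tweets]
    docs = [d for d in docs if len(d) >= min_len]    # drop very short text-only items
    counts = Counter(tok for d in docs for tok in d)
    vocab = {w for w, c in counts.items() if c >= min_count}
    return [[w for w in d if w in vocab] for d in docs], vocab
```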
Image preprocessing
Because topic models are better suited to discrete data and social media data contain unstructured features, we adopted a Bag-Of-Visual-Words (BOVW) model to transform each image into a bag of visual words. We initially segmented each image into patches using a graph-based algorithm [34], and subsequently extracted the seven types of features presented in Table 3 for each patch. Additionally, we adopted a z-score method to standardize the features. The k-means method was exploited to construct a visual dictionary. The value of k, also called the size of the dictionary, was determined experimentally by using the distortion function [36]. The distortion function was used to calculate the total distance between each instance and the centroid of its cluster. The distortion values in case of different numbers of clusters are presented in Fig. 4. Distortion decreased with an increase in the number of clusters, and an obvious inflection occurred when the number of clusters reached 750; therefore, we set the number of clusters to 750. Finally, each patch was quantified into the closest visual word.
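The dictionary-size selection and patch quantization can be sketched as follows, using scikit-learn's k-means inertia (the sum of squared distances to the closest centroid) as a stand-in for the distortion function described above; the graph-based segmentation and feature extraction steps are assumed to have been done already.

```python
import numpy as np
from sklearn.cluster import KMeans

def dictionary_distortions(patch_features, candidate_ks):
    """Compute the k-means distortion for each candidate dictionary size.

    patch_features: (n_patches, n_features) array of z-score standardized patch features.
    The dictionary size is picked where the distortion curve shows an elbow (750 in the text).
    """
    distortions = {}
    for k in candidate_ks:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(patch_features)
        distortions[k] = km.inertia_  # sum of squared distances to the closest centroid
    return distortions

def encode_image(patch_features, kmeans_model):
    """Quantify each patch into its closest visual word and return a bag of visual words."""
    words = kmeans_model.predict(patch_features)
    return np.bincount(words, minlength=kmeans_model.n_clusters)
```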
Model priors definition
To increase the impact of sentiments in the CASA model, we add an additional transformation matrix to modify the Dirichlet prior $\vec{\beta}$, so that word prior information can be encoded into the CASA model according to Ref. [37]. All the elements of $\vec{\beta}$ are initialized to 0.01, and all the elements of the transformation matrix are initialized to 1. Given a sentiment lexicon SD, for each term $w \in W$ and sentiment label $l \in S$, $S = \{0, 1, 2\}$ (for simplicity, "0" represents "negative", "1" represents "neutral", and "2" represents "positive"), when $w$ occurs in SD the row of the transformation matrix for $w$ is set to 1 at position $S(w)$ and to 0 elsewhere, where $S(w)$ is the prior sentiment label of $w$ in SD. Finally, $\beta_{lw}$ is updated by multiplying it by the corresponding element of the transformation matrix. On the basis of the prior $\vec{\beta}$, a term in the sentiment lexicon can only be generated from the word distribution of its corresponding sentiment. For example, the word "beautiful" with index $j$ in the textual vocabulary occurs in the sentiment lexicon, and its sentiment label is "positive" ($S(w) = l$). The corresponding row of the transformation matrix is $[0, 0, 1]$, and $\vec{\beta}_j$ is updated as $\vec{\beta}_j = [0, 0, 0.01]$. Therefore, "beautiful" can only be generated from the word distributions specific to the "positive" sentiment. If the term is not present in SD, then $\vec{\beta}_j = [0.01, 0.01, 0.01]$.
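The prior-encoding rule can be illustrated with the following sketch, which reproduces the "beautiful" example above; the function name and data layout are illustrative, not the authors' implementation.

```python
import numpy as np

def encode_sentiment_prior(vocab, lexicon, n_sentiments=3, base=0.01):
    """Build the word-sentiment Dirichlet prior beta with lexicon knowledge encoded.

    vocab   : list of textual words (columns of beta)
    lexicon : dict word -> prior sentiment index (0 negative, 1 neutral, 2 positive)
    Every element starts at `base`; the transformation entries are all ones for
    out-of-lexicon words and one-hot for lexicon words, so a lexicon word can only
    be generated by the word distribution of its prior sentiment.
    """
    beta = np.full((n_sentiments, len(vocab)), base)
    transform = np.ones((n_sentiments, len(vocab)))
    for j, w in enumerate(vocab):
        if w in lexicon:
            transform[:, j] = 0.0
            transform[lexicon[w], j] = 1.0
    return beta * transform

# Example: "beautiful" tagged positive keeps prior mass only in the positive row.
prior = encode_sentiment_prior(["beautiful", "table"], {"beautiful": 2})
# prior[:, 0] == [0.0, 0.0, 0.01];  prior[:, 1] == [0.01, 0.01, 0.01]
```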
The sentiment prior information was determined based on the textual words and emojis. In the case of textual words, the sentiment prior information was extracted from MPQA (http://mpqa.cs.pitt.edu/lexicons/subj_lexicon/) and SentiWordNet (http://sentiwordnet.isti.cnr.it/). To guarantee the reliability of the sentiment prior, only the words with strong positive or negative orientations in MPQA and with a sentiment value larger than 0.7 or smaller than -0.7 in SentiWordNet were extracted. For emojis, the sentiment prior was constructed from Ref. [38], which contained 751 emojis. Considering the emoji lexicon in Ref. [38], we extracted our emoji lexicon according to the following rules: (1) if the sentiment value is not less than 0.7, we set the prior polarity of this emoji as "positive"; (2) if the sentiment value is not larger than 0.3, we set the prior polarity of this emoji as "negative"; (3) if the ratio of the emoji occurring in negative tweets is less than 0.1, we set the prior polarity of this emoji as "positive" and "neutral".
Based on the aforementioned rules, the prior information statistics are presented in Table 4.
Parameter configuration
All the hyperparameters of the Dirichlet priors except $\vec{\beta}$ are symmetric, and the configuration of $\vec{\beta}$ is introduced in Section 5.2. In accordance with the relevant research [8, 28, 37], the other hyper-parameters were set as follows: $50/|T|$, 0.01, $\alpha = (0.05 \times L_D)/|S|$ (where $L_D$ is the average length of tweets in the corpus), and $\delta = (0.05 \times L_R)/|S|$ (where $L_R$ is the average length of comments in the corpus). In addition, the configurations of the topic number ($|T|$) and the number of iterations ($I$) need to be determined through experiments. Here, we used perplexity to set the topic number ($|T|$) and the number of iterations ($I$). Perplexity [39] is defined as the reciprocal of the geometric mean of the likelihood of a test corpus. In this study, the perplexity of the CASA model can be defined as
$$\mathrm{perplexity} = \exp\left\{ -\frac{\sum_{d} \log P(d)}{\sum_{d} \left( |T_d| + |I_d| + |R_d| \right)} \right\}, \qquad (24)$$
where $P(d)$ is the generating probability of the textual content $\vec{w}_d$, visual content $\vec{v}_d$, and comments $R_d$ of the tweet $d$. To set the values of $I$ and $|T|$, we initially set $I = 1000$ experimentally and calculated the perplexity of the CASA model with different $|T|$. The perplexity values with different numbers of topics are presented in Fig. 5. As the number of topics increased, the perplexity of CASA decreased until it became flat at 25 topics. Therefore, we fixed $|T| = 25$ and calculated the perplexity of the CASA model at different iteration times. The perplexity values in case of different iteration times are depicted in Fig. 6.
Experimental Results and Analysis
Sentiment annotations
To evaluate the sentiment classification performance, we manually labeled a portion of the tweets' sentiment. First, 1000 tweets containing both text and images were randomly selected from the experimental dataset. Second, these tweets were manually labeled using the sentiment set fpositive, neutral, negativeg. The final sentiment label, namely ground truth, was determined by the majority sentiment polarity results from related tweets. To ensure the reliability of the results, we only retained tweets with a voting proportion of more than 80%. Finally, we obtained the final labeled test dataset containing 456 tweets, including 225 positive tweets, 111 neutral tweets, and 120 negative tweets.
Comparison algorithms
In this subsection, we select five comparison methods in the field of social media sentiment analysis. The details of these methods are presented as follows.
CASA-reply: The CASA model proposed in this study was used without considering the influence of comments. We used the model to prove the efficiency of the comment context.
CASA-time: The CASA model proposed in this study was used without considering the influence of users' timelines. We used the model to prove the efficiency of this kind of contextual information.
SentiStrength [40] : We used a text sentiment analysis algorithm based on the sentiment lexicon, which is extensively used for short text sentiment detection in social media. We used this method to compare the efficiency of jointly modeling text and images.
SentiBank [15] : As an attribute representation designed for human affective computing, SentiBank includes 1200 ANPs, such as "cloudy moon" and "beautiful rose", which are carefully selected from the web data and represent human effects. SentiBank is intuitively suitable for conducting visual sentiment analysis. We used this method to compare the efficiency of jointly modeling text and images with the CASA model.
T-V-Early [21] : As a baseline for multimodal (text and image) sentiment analysis, this model uses GIST, LBP, and other feature extraction methods to represent the visual features and TF-IDF to represent the textual features. After feature extraction, three methods, including early fusion, late fusion, and Deep Boltzmann Machine (DBM), were used to detect the tweet sentiment. Previous experimental results show that early fusion is better than late fusion and the DBM with respect to the Twitter dataset. Therefore, we used the early fusion strategy and considered the contextual information, so that we can see the performance of our method compared with other models.
Results and analysis
The output of our CASA model is a 3-dimensional vector $\vec{\pi}_d$, where each entry indicates one of the sentiment polarities of tweet $d$. The final polarity depends on the entry with the maximal value,
$$\mathrm{Polarity}(d) = \arg\max_{s \in \{\mathrm{neg,\ neu,\ pos}\}} \pi_{ds}. \qquad (26)$$
In this study, the evaluation metrics included accuracy, macro-recall, macro-precision, and macro-$F_1$ [41]. These metrics are commonly used to measure the performance of multiclassification problems.
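For reference, a small sketch of the polarity decision and the macro-averaged metrics is given below; here macro-F1 is computed from macro-precision and macro-recall, which is one common convention, and the exact definition used in Ref. [41] may differ.

```python
import numpy as np

LABELS = ("negative", "neutral", "positive")

def polarity(pi_d):
    """Final polarity of tweet d: the entry of its sentiment vector with maximal value."""
    return LABELS[int(np.argmax(pi_d))]

def macro_scores(y_true, y_pred):
    """Accuracy and macro-averaged precision/recall/F1 for the three-class problem."""
    precisions, recalls = [], []
    for lab in LABELS:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    macro_p, macro_r = float(np.mean(precisions)), float(np.mean(recalls))
    macro_f1 = 2 * macro_p * macro_r / (macro_p + macro_r) if macro_p + macro_r else 0.0
    accuracy = float(np.mean([t == p for t, p in zip(y_true, y_pred)]))
    return accuracy, macro_p, macro_r, macro_f1
```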
Model performance
To evaluate the efficiency of the CASA model, we used the preprocessed test dataset described in Section 5 and compared the CASA algorithm with the remaining competing algorithms introduced in Section 6.2 based on the four previously illustrated evaluation metrics. Figure 7 denotes the performance of the related algorithms based on which it can be stated that the CASA model clearly outperformed the others. With respect to accuracy, our model surpassed the second best technique by approximately 2.8%. In case of macro-precision, CASA performed 4.3% better than the second ranked one. Similar results were observed in case of macro-recall and macro-F 1 with 8.3% and 7.1% improvement, respectively, when compared with the second best technique. SentiBank performed the worst in general because of its limited image feature extraction capability, especially when the conformity between the images and texts was not explicit. T-V-Early performed better than SentiStrength, indicating that multiple modalities are beneficial for conducting sentiment analysis. The results reported from T-V-Early and CASA demonstrated that the contextual information of tweets could help improve the sentiment detection performance.
Context contribution analysis
As previously mentioned, two types of contextual information were considered in our model. In this section, we analyzed the contribution of these two types of contextual information when analyzing the tweet sentiment. As depicted in Fig. 8, CASA-reply and CASA-time performed worse than CASA. CASA-time outperformed CASA-reply, which indicated that the comment contextual information is more important than users' timelines in tweet sentiment analysis. The reason for the above result may be twofold. On the one hand, this observation can be attributed to the short length of tweets and the lack of explicit sentiment words in these tweets. However, by integrating the comments and tweets, we alleviated the shortcomings of the limited length and the lack of explicit sentiment words. On the other hand, the CASA model considered the correlations of tweets sent in the recent past, but the sentiment correlation of these adjacent tweets was sometimes slightly weak, which led to the result of CASA-reply being slightly worse than CASA-time.
Conclusion
In this study, we investigated the problem of social media sentiment analysis. A probability model called CASA is proposed for tweet sentiment analysis in which the semantic correlation of different modalities and the influence of the tweet contextual information are both considered. Through the comparison and analysis of the experimental results, the proposed CASA model was observed to efficiently detect the sentiments contained in multimodal tweets; both the types of contextual information used in our model can significantly improve the model performance.
Bo Liu received the PhD degree from Southeast University in 2007. She is currently an associate professor at the School of Computer Science in Southeast University, Nanjing, China. Her research interests include spammer detection in social network, the evolution of social community, social influence, and multiagent technology.
Shijiao Tang is currently a master student at the School of Computer Science and Technology in Southeast University, Nanjing, China. His research interests include event detection, event evolution, and sentiment analysis in social media.
Xiangguo Sun is currently a PhD candidate at the School of Computer Science and Technology, Southeast University, Nanjing, China. His research interests include social media analysis, user behaviors mining, network embedding, and sentiment analysis.
Qiaoyun Chen received the master degree from the School of Computer Science and Technology, Southeast University, Nanjing, China. She now works as a research assistant in Microsoft Research Asia. Her research interests include social media analysis, big data analysis, social influence, and user behavior modeling.
Jiuxin Cao received the PhD degree from Xi'an Jiaotong University in 2003. He is currently a professor at the School of Cyber Science and Engineering in Southeast University, Nanjing, China. His research interests include computer networks, social computing, behavior analysis, and big-data security and privacy preservation.
Junzhou Luo received the PhD degree from Southeast University in 2000. He is currently a full professor at the School of Computer Science and Engineering, Southeast University, Nanjing, China. His research interests include next-generation networks, protocol engineering, network security, cloud computing, and wireless LAN.
Shanshan Zhao received the PhD degree from Xi'an Jiaotong University in 2008. She is currently a research fellow at the Faculty of Engineer and Technology, University of West England, Bristol, UK. Her research interests include industrial IoT, optimization algorithms, multiscale modelling, and lightweight computational algorithm. | 2020-05-24T12:01:37.278Z | 2020-01-13T00:00:00.000 | {
"year": 2020,
"sha1": "950c7ef7f99600ee1cfce52d9fa7d5f84fe297fd",
"oa_license": null,
"oa_url": "https://ieeexplore.ieee.org/ielx7/5971803/8954860/08954871.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c9f0700bb7f7403ca9a5ba6919fab20d401fdafd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
255938941 | pes2o/s2orc | v3-fos-license | Discovery and characterization of novel paramyxoviruses from bat samples in China
Many paramyxoviruses are responsible for a variety of mild to severe human and animal diseases. Based on the novel discoveries over the past several decades, the family Paramyxoviridae infecting various hosts across the world includes 4 subfamilies, 17 classified genera and 78 species now. However, no systematic surveys of bat paramyxoviruses are available from the Chinese mainland. In this study, 13,064 samples from 54 bat species were collected and a comprehensive paramyxovirus survey was conducted. We obtained 94 new genome sequences distributed across paramyxoviruses from 22 bat species in seven provinces. Bayesian phylodynamic and phylogenetic analyses showed that there were four different lineages in the Jeilongvirus genus. Based on available data, results of host and region switches showed that the bat colony was partial to interior, whereas the rodent colony was exported, and the felines and hedgehogs were most likely the intermediate hosts from Scotophilus spp. rather than rodents. Based on the evolutionary trend, genus Jeilongvirus may have originated from Mus spp. in Australia, then transmitted to bats and rodents in Africa, Asia and Europe, and finally to bats and rodents in America.
Introduction
Emerging infectious diseases pose serious threats to global economy, security and public health. Most human pathogens, including Marburg virus, Nipah virus, Hendra virus, Ebola virus, severe acute respiratory syndrome coronavirus (SARS-CoV), Middle East respiratory coronavirus (MERS-CoV), SARS-CoV-2 and others, have animal origins and arose through cross-species transmission (Lau et al., 2005;Leroy et al., 2005;Memish et al., 2013;Smith and Wang, 2013;Swanepoel et al., 2007;Wang et al., 2006). These viruses probably originate from bat, the second most diverse mammalian order after rodent, comprising approximately 22% of all named mammal species (Letko et al., 2020). Bats are widely distributed geographically, have extensive species diversity and have unique behavior such as flight patterns, long life spans, gregarious roosting and mobility behaviors. Bats also have intimate interactions with human and livestock, regarded as natural reservoirs for a diverse pool of viruses and deserved to be studied in depth (Calisher et al., 2006;Drexler et al., 2012;Halpin et al., 2011).
Although sporadic studies contributed to this abundant list, no systematic surveys of bat paramyxoviruses are available for the Chinese mainland. Our team has conducted perennial viral surveys on rodents and bats across China and Southeast Asia to monitor and discover the potential zoonotic viral pathogens (Wu et al., 2018(Wu et al., , 2021. We established two continuously updated online virome databases, DBatVir (Chen et al., 2014) and DRodVir (Chen et al., 2017), and discovered Mojiang virus (Wu et al., 2014) of Henipavirus genus from rodents (Rattus flavipectus) in 2012 and Bat Ms-ParaV/Anhui2011 (Wu et al., 2016) of Jeilongvirus genus from bats (Miniopterus schreibersii) in 2016. Between 2016 and 2021, more than 13,000 samples from 54 bat species were collected and underwent a comprehensive virome survey. Based on preliminary work and existing resources, a retrospective study surveying paramyxoviruses was performed on multi-bat species sampled in China. The identification of diverse paramyxoviruses, as well as the new characterizations of their genome organizations, RNA editing sites, transmembrane helices sites, and glycosylation sites, increased our understanding for ecological distribution of bat paramyxoviruses in China, and spurred us to further investigate the host and region switches of such viruses.
Sample collection
Between 2016 and 2021, a total of 13,064 samples from 54 bat species were collected from 703 sampling sites in fourteen provinces (Guangxi, Guangdong, Yunnan, Sichuan, Hubei, Hainan, Zhejiang, Jiangxi, Guizhou, Hunan, Liaoning, Fujian, Anhui and Chongqing) as described previously (Wu et al., 2022). The captured bat species were initially determined from morphological features and subsequently confirmed using patagium with barcoding of mitochondrial cytochrome b. The sampling locations were recorded by geographic names and GPS coordinates of latitude and longitude. The swab samples of pharyngeal and anal in triplicate were immersed in virus sampling tubes (Yocon, China) with maintenance medium, temporarily stored at À20 C, transported to the laboratory and stored at À80 C.
Library construction and next generation sequencing
According to the bat species and the sampling sites, the swab samples were pooled by adding 1 mL of each maintenance medium into one new container. The pooled samples were then processed with a virus-particle-protected nucleic acid purification method used in previous research. The samples were centrifuged at 10,000 ×g for 10 min to precipitate the impurities. The supernatants were filtered through a 0.45 μm polyvinylidene difluoride filter (Millipore, Germany) to remove eukaryotic and bacterial-sized particles. Then the filtered samples were centrifuged at 100,000 ×g for 3 h at 4 °C. The pellets re-suspended in Hank's balanced salt solution were digested in a cocktail of DNase and RNase enzymes (Turbo DNase, Ambion, USA; Benzonase, Novagen, Germany; and RNase One, Promega, USA) to remove naked DNA and RNA at 37 °C for 2 h. The viral nucleic acids were extracted using a QIAmp MinElute Virus Spin Kit (Qiagen, USA). The first-strand viral cDNA was synthesized using the primer K-8N and a SuperScript™ III First-Strand Synthesis system (Invitrogen, USA). The cDNA was further converted into double-stranded cDNA (ds cDNA) with a Klenow fragment (NEB, United States) at 37 °C for 1 h. Sequence-independent PCR amplification was performed using primer K. PCR fragments of 300 to 2000 bp were extracted with magnetic beads (Beckman Coulter, USA). The purified products were quality checked on an Agilent 2100 and then mixed at equal molar concentrations. Libraries were constructed using the Nextera® XT DNA Sample Preparation Kit (Illumina, USA) according to the manufacturer's instructions and sequenced with an Illumina HiSeq X Ten sequencer for paired-end reads of 150 bp. The sequence reads were filtered according to previously described criteria (Yang et al., 2011). Clean data were generated after adaptor sequences, primer K sequences, and low-quality reads were removed.
Taxonomic assignment and genome assembly
The valid sequence reads were aligned to sequences in the NCBI non-redundant nucleotide database (NT) and non-redundant protein database (NR) using BLASTn and BLASTx, respectively. The taxonomies with the best BLAST scores (E-value < 10⁻⁵) were parsed by MEGAN6. Extracted paramyxovirus reads were assembled by MEGAHIT v1.2.9, and the assembled sequences were used as references during genome sequencing.
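As an illustration of this kind of taxonomic filtering, the sketch below keeps, for each read, the best-scoring BLAST hit with an E-value below 10⁻⁵; it assumes standard BLAST tabular output (-outfmt 6) and is only a stand-in for the MEGAN6-based parsing actually used in the study.

```python
import csv
from collections import defaultdict

def best_hits(blast_tabular_path, max_evalue=1e-5):
    """Keep, for each read, the hit with the highest bit score below the E-value cutoff.

    Assumes BLAST tabular output (-outfmt 6): qseqid sseqid pident length mismatch
    gapopen qstart qend sstart send evalue bitscore.
    """
    best = {}
    with open(blast_tabular_path) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            read_id, subject = row[0], row[1]
            evalue, bitscore = float(row[10]), float(row[11])
            if evalue > max_evalue:
                continue
            if read_id not in best or bitscore > best[read_id][1]:
                best[read_id] = (subject, bitscore)
    return best

def counts_per_subject(best):
    """Tally how many reads are assigned to each subject (e.g., a paramyxovirus protein)."""
    tally = defaultdict(int)
    for subject, _ in best.values():
        tally[subject] += 1
    return dict(tally)
```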
Genome sequencing
The assembled contigs of paramyxoviruses and their closest reference sequences were identified by BLAST. The partial genomes were amplified with nested specific primers. The PCR products, between 1500 bp and 2000 bp, were gel purified and then sequenced. The remaining genomic sequences were determined using genome walking and 5′ and 3′ rapid amplification of cDNA ends (RACE). The ORFs of the completely sequenced viruses were predicted with the ORFfinder of NCBI (https://www.ncbi.nlm.nih.gov/orffinder/).
Bayesian phylodynamic analysis
Aside from sequences obtained in this study, all Jeilongvirus sequences from GenBank were added to our dataset. The conserved region of 99 amino acids related to the target fragment of the universal primer was used as a query for BLASTx, and a total of 353 Jeilongvirus sequences were found. After removing redundant sequences with high identity from the same host and the same collection location, a total of 127 sequences were finally used for Bayesian phylogenetic analysis. Multiple sequence alignments of the nucleotide sequences were performed using MAFFT v7.475 (Katoh and Standley, 2013), followed by trimAl to trim the sequences. TempEst was used to assess the temporal structure within the datasets. The datasets did not contain enough temporal signal to estimate substitution rates and the time to the most recent common ancestor (TMRCA); therefore, tip dates were not used. The best-fitting substitution model, GTR + Gamma(4) + Invariant sites, was selected according to the results of ModelFinder (Kalyaanamoorthy et al., 2017). The analysis was run to select between strict and lognormal uncorrelated relaxed clocks and among coalescent models (constant population size, exponential growth, and GMRF Bayesian Skyride). Host family and sampling region were reconstructed as ancestral states for each node in the phylogenetic tree as two discrete traits, and a symmetric trait substitution model was applied for the BSSVS analysis. The BSSVS was applied to estimate the significance of pairwise switches between trait states using Bayes factors (BF) computed in SpreaD3 (Bielejec et al., 2016) as a measure of statistical significance (Lemey et al., 2009). The BF support scale was interpreted according to Jeffreys (1961). All states with BF > 3 were presented through TBtools (Chen et al., 2020). Model combinations were compared, and the best-fitting model was selected using path sampling/stepping-stone sampling analyses (Baele et al., 2012, 2013). Finally, a lognormal uncorrelated relaxed clock and a constant population size coalescent model were selected. Each analysis was run for 5 × 10⁷ generations in BEAST v1.10.4, with sampling every 5 × 10³ steps. Convergence of the chain was visualized in Tracer v1.7.1, and the effective sample size (ESS) of the main parameters was greater than 200 after discarding the first 10% of the chain as burn-in. TreeAnnotator was used to summarize posterior tree distributions and annotate the estimated values onto a maximum clade credibility (MCC) tree with a burn-in of 10%, which was visualized using FigTree v1.4.4. To assess the phylogenetic relationships among Jeilongvirus, a haplotype network was constructed with the last 100 bp of the 353-sequence Jeilongvirus dataset using PopART through the TCS network program. After combining redundant identical sequences, 274 kinds of haplotypes were left.
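The BF-based filtering step can be illustrated as follows; the Jeffreys category labels below are a common reading of the 1961 scale, the BF > 3 cutoff matches the threshold used above, and the example values are hypothetical.

```python
def jeffreys_support(bf):
    """Map a Bayes factor to Jeffreys' qualitative support scale (common reading)."""
    if bf < 3:
        return "not worth more than a bare mention"
    if bf < 10:
        return "substantial"
    if bf < 30:
        return "strong"
    if bf < 100:
        return "very strong"
    return "decisive"

def supported_switches(pairwise_bf, threshold=3.0):
    """Keep only trait switches (host or region pairs) whose BF exceeds the threshold."""
    return {pair: bf for pair, bf in pairwise_bf.items() if bf > threshold}

# Hypothetical example:
# supported_switches({("bat", "rodent"): 12.4, ("bat", "feline"): 1.8})
# -> {("bat", "rodent"): 12.4}
```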
Identity and phylogenetic analysis
The percent identities of the nucleotide and deduced amino acid sequences were assessed using MegAlign (DNAStar Lasergene package v.7.0.1). Phylogenetic analysis of the complete L protein amino acid sequences was performed using the maximum likelihood (ML) method available in IQ-TREE (Nguyen et al., 2015) with 1000 ultrafast bootstrap replicates (Hoang et al., 2018), employing the best-fit model LG+F+I+G4 chosen according to the Bayesian Information Criterion (BIC) in ModelFinder (Kalyaanamoorthy et al., 2017). The resulting phylogenetic trees were visualized using FigTree v1.4.4.
Transmembrane helices and glycosylation site analysis
The deduced amino acid sequences were used for prediction: the complete SH, TM, and X genes for prediction of transmembrane helices with TMHMM, and the complete F and G genes for prediction of N-linked glycosylation sites and O-GalNAc (mucin-type) glycosylation sites with NetNGlyc 1.0 and NetOGlyc 4.0, respectively (https://services.healthtech.dtu.dk/).
Accession numbers
All genome sequences were submitted to GenBank. Accession numbers for the viruses are ON263473 to ON263566.
Virome and prevalence analysis for paramyxoviruses
A total of 40 sequences of bat paramyxoviruses were identified in 14 provinces and the Macao special administrative region of China and recorded in DBatVir and GenBank, thereinto 16 sequences originated from our previous study (Wu et al., 2016) (Supplementary Table S1). The 36 sequences were found from 15 bat species, while 4 belonged to the Orthorubulavirus genus of Rubulavirinae subfamily from Guangdong, Shanxi and Jilin provinces; 4 belonged to the Pararubulavirus genus of Rubulavirinae subfamily from Guangdong and Yunnan provinces; 1 belonged to the Henipavirus genus of Orthoparamyxovirinae subfamily from Yunnan Province; 27 belonged to the Jeilongvirus genus of Orthoparamyxovirinae subfamily. The other 4 sequences belonged to the Orthoavulavirus genus of Avulavirinae subfamily found in unclassified Chiroptera from Guangxi Province.
In this study, 13,064 samples from bats were pooled and applied for next-generation sequencing as described previously (Wu et al., 2022). Meanwhile, a total of 760.3 GB of nucleotide clean data with 1,718,361,529 valid reads was obtained from next-generation sequencing. Among them, 612,529 reads (~0.036% of the total sequence reads) were matched with paramyxovirus proteins available in the NCBI NR database, and 291 out of 372 pools were found to be paramyxovirus-positive. The proportion of paramyxovirus-related reads in each pool varied from 0.00016% to 73.66%. Based on assembled contigs and nested specific primers, the PCR results confirmed that 65 out of the total 372 pools were positive for paramyxovirus. Ninety-four new genome sequences distributed across paramyxoviruses were obtained experimentally from 22 bat species in seven provinces, including Guangdong, Guangxi, Yunnan, Hainan, and Jiangxi (Supplementary Table S2). Only one strain sequence, BtEsp-ParaV/YN2017A, a 336 nt fragment of the phosphoprotein gene, was determined to belong to the Orthorubulavirus genus of the Rubulavirinae subfamily; the other 93 sequences belonged to the Jeilongvirus genus of the Orthoparamyxovirinae subfamily.
In total, by combining current study data with published data, 134 sequences were discovered from 31 bat species and unclassified Chiroptera spp. in sixteen provinces and the Macao special administrative region of China (Fig. 1).
Derivation of global evolution trend
By using sequence alignment analysis in GenBank, the phylogenetic reconstruction of 127 partial large protein (L) sequences based on the consensus degenerate primers PAR-F/R (Tong et al., 2008) was conducted. The clustering trend of the maximum clade credibility (MCC) tree became more obvious as more sequences, bat species, and regions were included (Fig. 2). All Jeilongvirus could be divided into four main lineages: (1) lineage 1 (L1), in which all the bat-borne paramyxoviruses formed a well-supported monophyletic cluster associated with the previously proposed Shaanvirus genus and were distributed in Asia, North America, and Africa; (2) lineage 2 (L2), which contained rodent-borne paramyxoviruses from across the world, with feline-borne paramyxoviruses from Germany and Japan evolving later in time; (3) lineage 3 (L3), with notably more members, which included bat-borne paramyxoviruses of multiple species from Asia alone, hedgehog-borne paramyxoviruses from Europe, and rodent-borne paramyxoviruses from North America and Africa; (4) lineage 4 (L4), which covered North America and South America and contained bat-borne paramyxoviruses only. The viruses identified in this study were distributed in L1 and L3.
A total of 274 kinds of haplotype with different individual sizes were used in network construction ( Supplementary Fig. S1). The evolutions of bats among L1, L3 and L4 with rodents between L2 and L3 were intertwined rather than independent. The bat colony and rodent colony were aggregated, but the bat colony was invaded by some rodent populations. The seven species of Jeilongvirus in ICTV were not the ancestral viruses and leaned to the tips of tendency.
Host and region switches
A Bayesian stochastic search variable selection (BSSVS) procedure was used to identify the host and region switches of Jeilongvirus among bats, rodents, felines, and hedgehogs with the above L sequences. Bayesian factors (BF) were calculated to estimate the significance of switches (Fig. 3).
Fig. 2. Evolution overview of Jeilongvirus. The maximum clade credibility tree is inferred from partial L sequences. Sequences from bats are labeled in red, rodent sequences are labeled in blue, feline sequences are labeled in violet, and the hedgehog sequence is labeled in yellow. The viruses identified in this study are labeled by green rectangles.
Discovery of novel paramyxoviruses and phylogenetic analysis
Combining the sequences from this study with the reference complete genomes from rodents, bats, hedgehogs, and felines proposed to Jeilongvirus in recent studies, 17 novel representative sequences with complete or nearly complete genomes were selected for analysis. Using the complete L gene of the above sequences, phylogenetic reconstruction was conducted. As shown in Fig. 4A, the topology for Jeilongvirus was consistent with the evolutionary trend described above. The L1 and L4 showed bat species-specificity. L1 contained 16 novel paramyxoviruses from this study and 1 species recognized by ICTV. L4 comprised three novel bat paramyxoviruses from Brazil that were previously suggested to establish a putative novel genus named Macrojêvirus (de Souza et al., 2021) of the Orthoparamyxovirinae subfamily. In contrast, both L2, which included six rodent-borne species recognized by ICTV, and L3, which included one novel bat paramyxovirus from this study, had mixed host sources.
Genome organization of novel bat paramyxoviruses
Based on phylogenetic analysis of the complete L gene, host range and biochemical criteria, the 17 new discoveries and recently studies of complete genomes proposed to the genus Jeilongvirus could be divided into four lineages with some special characterizations (Fig. 4B, Table 1 and Table 2).
The L1 contained Miniopteran jeilongvirus, discovered and isolated from Miniopterus schreibersii, together with discoveries from Hipposideros larvatus, Hipposideros armiger, Ia io, Scotophilus kuhlii, Rhinolophus sinicus, Rhinolophus affinis, and Hipposideros pomona. All L1 viruses had eight ORFs with the order 3′-N-P-M-F-SH-TM-G-L-5′, consistent with Beilong jeilongvirus, Tailam jeilongvirus, Jun jeilongvirus, and Myodes jeilongvirus. The SH and TM ORFs of the above four Jeilongvirus species varied from 69 to 82 and 254 to 258 amino acids (210 to 249 and 765 to 777 nucleotides), whereas those of the L1 viruses were longer, from 216 to 277 and 419 to 561 amino acids (651 to 834 and 1260 to 1686 nucleotides). The RNA editing site for processing the V or W protein conserved in the P gene sequence of the rodent-borne and feline-borne Jeilongvirus of the L2 was 'TTAAAAAAGGCA', but in the L1 it was completely replaced by 'TTAAAAAAACCA'. The L4 included the bat paramyxoviruses from Araçatuba, Brazil, that were assigned to a putative novel genus in 2021. All L3 and L4 viruses had seven ORFs with the order 3′-N-P-M-F-X-G-L-5′, consistent with Lophuromys jeilongvirus 2, Lophuromys jeilongvirus 1, and Feline paramyxovirus. The TM ORFs of the above three varied from 218 to 275 amino acids (657 to 828 nucleotides), and the X gene between F and G encoded from 227 to 658 amino acids (684 to 1977 nucleotides). The RNA editing site was conserved with that of the L2. BtRpu-ParaV/YN2020H shared 42.9%–58.3% aa identities in N, 28.0%–58.9% in P, 42.3%–62.2% in M, 39.0%–56.6% in F, 13.1%–70.7% in X, 19.1%–77.1% in G, and 47.4%–63.4% in L.
Transmembrane helices of SH, TM and X
The SH, TM and ORFX of four lineages were used for the prediction of transmembrane helices (Supplementary Fig. S2). The lengths of SH, TM and X proteins, the locations of the putative transmembrane regions as well as the intracellular and extracellular regions were different. All had one predicted transmembrane helix except for BtRpu-ParaV/YN2020H of L3 which the X encoded a putative protein of 227 amino acids and had no transmembrane region.
Glycosylation site analysis
The potential N-linked and O-GalNAc glycosylation sites of F and G genes closely related to viral infection were predicted. The amino acid positions of glycosylation sites were listed in Supplementary Table S5. No O-GalNAc glycosylation sites of F genes in BtIi-ParaV/YN2020K, BtSk-ParaV/GX2019A and BtHp-ParaV/GD2016H were found.
Discussion
According to one estimate, at least 10,000 virus species can infect humans, but the majority are circulating imperceptibly in wildlife (Carlson et al., 2019; Olival et al., 2017). Some factors, including climate change and increased land use, promote the spillover of zoonotic diseases, resulting in more viral interspecies transmission events from previously geographically isolated species of wildlife (Carlson et al., 2022; Hoberg and Brooks, 2015; Morales-Castilla et al., 2021). Bats account for most novel viral sharing events and are likely to share viruses that might promote EIDs in humans. Among bat-originated viruses, Hendra virus and Nipah virus under the family Paramyxoviridae are known to infect humans and cause fatal diseases. In 2014, a novel rat henipavirus, Mojiang virus, was detected and later confirmed to be unable to interact with known paramyxoviral receptors in vitro (Rissanen et al., 2017; Wu et al., 2014). Recently, a hypothesized shrew-borne henipavirus named Langya henipavirus, associated with a febrile human illness, was found in patients and in-contact animals, which further underscores the need for surveillance and characterization of paramyxoviral pathogens. To obtain background information on bat paramyxoviruses in China, bat samples across the mainland were collected and then applied for NGS-based virome analysis, and a large number of novel paramyxovirus-related sequence reads were found and assigned to the genus Jeilongvirus.
Because of limited discoveries, the genus Jeilongvirus was previously recognized as having a genome constitution of eight ORFs with the order 3′-N-P-M-F-SH-TM-G-L-5′ and as exclusively encoding the TM protein compared with other genera of Paramyxoviridae. However, novel viruses proposed to this genus in recent years have new hosts, including hedgehogs and cats, and some of these viruses contain only seven major ORFs.
In this study, we discovered 94 paramyxovirus sequences from 22 bat species in China and obtained 17 complete or nearly complete genomes. The results strongly suggest that the genus Jeilongvirus can be divided into four lineages on the basis of phylogenetic analysis of the complete L gene, host range, and biochemical criteria, incorporating the new viruses identified in recent studies (de Souza et al., 2021; Sakaguchi et al., 2020; Vanmechelen et al., 2020). The L1, with eight ORFs, and the L4, with seven ORFs, were bat-monophyletic; the L2 contained viruses with eight or seven ORFs in rodents and felines; and the new L3, with seven ORFs, evolved in bats, hedgehogs, and rodents. Rhinolophus, Scotophilus, Miniopterus, and Myotis were present in both L1 and L3, so the same bat genus can harbor viruses with different genome structures.
The coding strategy and editing site (TTAAAAAAGGCA) of the Jeilongvirus P gene, which plays an important role in evading the host innate immune system, are relatively conserved in Henipavirus and Morbillivirus, which encode the V or W protein by the addition of one or two non-templated G residues (Jack et al., 2005). The P, V, and W proteins of Nipah virus, a highly lethal pathogen that has been studied well, all block the cellular response to interferon (IFN) by binding to and preventing the tyrosine phosphorylation of signal transducer and activator of transcription 1 (STAT1). The P gene products suppress both the production of and signaling by IFN. Both the V and W proteins block IFN regulatory factor 3-dependent gene expression; V, as the major determinant of pathogenesis, interacts with the cytoplasmic helicase melanoma differentiation-associated protein 5 (MDA5) and inhibits MDA5-dependent activation of the IFN-β promoter, and W modulates the inflammatory host immune response in a manner that determines the disease course (Ciancanelli et al., 2009; Satterfield et al., 2015). However, Cedar virus, which belongs to Henipavirus together with Nipah virus and Hendra virus, lacks this pathogenicity. Studies in hamsters, ferrets, and guinea pigs confirmed virus replication and the production of neutralizing antibodies, but clinical disease was not observed. The experimentally verified reasons for its inability to cause disease are receptor specificity to ephrin-B2 only, an inability to suppress the type I IFN response, and the lack of V and W proteins (Schountz et al., 2019). The corresponding nucleotide sequence of Cedar virus is 'TAAAGATCAGGG'. The RNA editing sites of L3 and L4 are conserved with the L2 of Jeilongvirus, but in the L1, which was found only in bats, the site is replaced by 'TTAAAAAAACCA', which may indicate that (1) an evolutionary change has produced a new coding strategy, or (2) the P gene of these bat-related paramyxoviruses lacks RNA editing and the capacity to encode the V or/and W protein, which would influence pathogenicity, as in the case of Cedar virus. Further experiments are needed to determine the coding strategy, pathogenicity, and invasive mechanisms of these bat paramyxoviruses.
Infection studies of Beilong virus and J virus of Jeilongvirus indicate that they can inhibit STAT1 responses to IFN-α. Both viruses encode V proteins, but these proteins lack interaction with STAT1/2 and antagonist function toward type I IFN signalling, suggesting that alternative proteins, such as P, C, W, SH, and TM, might function as IFN antagonists (Audsley et al., 2016). Perhaps the functions of P, C, and W are similar to those in Henipavirus and Morbillivirus, but this has not been examined. The SH protein of J virus can inhibit tumor necrosis factor alpha (TNF-α) production and plays an essential role in blocking apoptosis and in virulence. Although there is no sequence homology among the SH proteins of mumps virus, respiratory syncytial virus, and J virus, their functions are similar (Abraham et al., 2018). The TM protein of J virus is a type II integral membrane protein and is required, together with F and G, for efficient cell-to-cell fusion, but it does not affect replication in tissue culture cells (Li et al., 2015). All SH, TM, and X proteins of Jeilongvirus have one predicted transmembrane helix except for that of BtRpu-ParaV/YN2020H of L3, which has no transmembrane region, indicating a functional difference between X and TM. More attention should be focused on these novel paramyxoviruses with significant changes.
The host and region switches were tentatively established, and the Bayesian factors were calculated based on the available data. Combining the information on hosts and regions, the approximate evolutionary trend was revealed. The genus Jeilongvirus may have originated from Mus in Australia, then been transmitted to bats and rodents in Africa, Asia, and Europe, and finally to bats and rodents in America. Felines and hedgehogs were most likely intermediate hosts that acquired viruses from Scotophilus bats rather than from rodents. Based on this trend, the bat colony tended to retain viruses internally, whereas the rodent colony tended to export them. However, it is important to note that: (1) studies focused on the Jeilongvirus genus remain insufficient in host range and territorial scope; (2) most samples were tested with the consensus degenerate primers to obtain partial L sequences instead of complete genomes; (3) sampling imbalance among different hosts from the same region, or different regions for the same host, influences the inferred transmission trend; (4) the precise origin and transmission times were not determined, and many important switch-intermediate nodes were not found. Owing to the limitations of the current data, more extensive surveys of animal paramyxoviruses in under-sampled hosts and regions are needed to further investigate the accurate origin and evolutionary route.
Conclusions
In this study, we conducted a systematic survey of bat paramyxoviruses in China. A total of 94 paramyxovirus sequences were identified from 22 bat species, and 17 complete or nearly complete genomes were obtained among them. The genus Jeilongvirus could be divided into four lineages, with an inferred evolutionary trend from Mus in Australia, to bats and rodents in Africa, Asia, and Europe, and finally to bats and rodents in America. Although there is no evidence at present that bat jeilongviruses pose a threat to human health, the findings in this study are still helpful for understanding the relationships between the distribution and cross-species transmission of paramyxoviruses and the migration and co-roosting of their broadly distributed hosts. Further experiments are needed to determine the pathogenicity and invasive mechanisms of these jeilongviruses in bats or other related animal hosts.
Data availability
Datasets generated and analyzed during the current study are available in this published article (and its supplementary information files).
Ethics statement
Animals were treated according to the guidelines of the Regulations for the Administration of Laboratory Animals (Decree No. 2 of the State Science and Technology Commission of the People's Republic of China, 1988). Sampling procedures were approved by the Ethics Committee of the Institute of Pathogen Biology, Chinese Academy of Medical Sciences & Peking Union Medical College (Approval number: IPB EC20100415). | 2023-01-17T16:21:41.507Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "17f51fadbb637dc9828cc874390301da07898667",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.virs.2023.01.002",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffd7570c10b1c072717b7d4fb90d38dc5842e747",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219766198 | pes2o/s2orc | v3-fos-license | Speed Estimation of Adaptive Stator-flux-vector-controlled Induction Motor Drive Based on Particle Swarm Optimization Algorithm
An adaptive synchronous speed estimation scheme is proposed for the speed estimation of a stator-flux-vector-controlled (SFVC) induction motor (IM) drive. An SFVC IM drive was established according to the current and flux of the stator, and the stator current was obtained from an IM by using Hall effect current sensors. A model reference adaptive system (MRAS) was utilized to design the synchronous speed identification scheme based on the reactive power, and the estimated rotor speed was obtained by subtracting the slip speed from the estimated synchronous speed. The adaptation mechanism of the MRAS was designed using the particle swarm optimization (PSO) algorithm. The available operation speed was extended to the constant-power mode by applying the field-weakening technique. The MATLAB\Simulink® toolbox was used to simulate this system, and all the control algorithms were realized using a Texas Instruments 6713-and-F2812 DSP card to generate pulse-width modulation signals for the power stage to actuate the IM. Both the simulation and experiment results (including the estimated rotor speed, stator current, electromagnetic torque, and stator flux locus) confirm the effectiveness of the proposed system and validate the proposed approach.
Introduction
Intelligent manufacturing technology requires numerous high-performance motors to actuate machine tools. Induction motors (IMs) are commonly adopted because of their robustness, few maintenance requirements, and suitability for use under hostile environmental conditions. However, the nonlinear coupling and time-varying mathematical models of an IM drive make its control more difficult than that of a DC motor drive. By applying the flux vector control (FVC) theory of IMs, (1) the complicated mathematical model of an IM can be converted into a flux-current component and torque-current component. Both components are orthogonal and can be separately controlled. This condition is analogous to a separately excited DC motor, and the maximum torque-to-current ratio can be attained. The FVC methods of an IM drive can be classified into rotor, stator, and air-gap types. In the rotor type, the stator current and rotor flux are selected as the state variables. In the stator type, the current and flux of the stator are selected as the state variables. In the air-gap type, the stator current and air-gap flux are selected as the state variables. The implementation of an FVC IM drive requires a rotor position sensor, such as an encoder, to detect the shaft position. This sensor, however, reduces the robustness of a motor and is unsuitable for hostile conditions. Hence, the development of speed estimation FVC IM drives in place of conventional FVC IM drives (rotor position sensor types) is required. In the literature, speed estimation methods for FVC IM drives have been presented: speed identification by an adaptive control system, (2)(3)(4) speed estimation by the application of a neural network or fuzzy logic control approach, (5,6) speed adjustment by flux estimation, (7)(8)(9)(10) and speed determination from an extended Kalman filter. (11)(12)(13) However, an adaptive control system easily traps a chattering effect with large control variables; a neural network or fuzzy logic control approach requires trial-and-error training procedures, iterative computations, a large amount of training data, network parameter assignment, and fuzzy rules; flux estimation requires the construction of an accurate plant model; and an extended Kalman filter requires a large amount of computation and memory. These requirements of the above methods will increase the design cycle and cost.
Variable-speed IM drives contain a constant-torque mode and the constant-power mode. In the constant-torque mode, the operation speed ranges from zero to the base speed, the flux command is set at the base value, and the available output power is proportional to the motor speed. In the constant-power mode, the operation speed ranges from the base speed to the maximum speed (two times the base speed), the flux command decreases with increasing motor speed, and the increase in the motor speed decreases the available torque. In this study, a stator-flux-vector-controlled (SFVC) IM drive was established according to the current and flux of the stator. A synchronous speed identification scheme was developed according to the model reference adaptive system (MRAS) theory based on the reactive power of an IM, and the adaptation mechanism of the MRAS was designed using the particle swarm optimization (PSO) algorithm. The rotor speed was estimated by subtracting the estimated slip speed from the estimated synchronous speed. The available operation speed range can be extended to the constant-power mode by applying the field-weakening technique. Hall effect current sensors were used to measure the IM stator current in the implementation of this speed estimation of adaptive SFVC IM drive. This paper has four sections. In Sect. 1, the research background, motivation, and a literature review of speed estimation methods for FVC IM drives are presented. The decoupled SFVC IM drive system used in this study is covered in Sect. 2. The MRAS synchronous speed identification scheme based on the reactive power and the adaptation mechanism of the MRAS designed using the PSO algorithm are described in Sect. 3. Simulations and experiments are discussed in Sect. 4.
SFVC IM Drive
The stator and rotor voltage vector equations of an IM in the synchronous reference coordinate frame are taken from Ref. 14, where j is the imaginary unit, ∧ stands for an estimated value, τ_r = L_r/R_r is the rotor time constant, and s is the Laplace operator.
The second term on the right of Eq. (4) is the coupling component related to the q-axis stator current. Using this term, the feedforward compensation can be defined as in Eq. (5). Hence, the linear relationship between the estimated d-axis stator flux and the d-axis stator current can be derived as in Eq. (6). The generated electromagnetic torque of an IM under the SFVC condition is given by Eq. (7), where P denotes the number of motor poles. In Eq. (7), the q-axis stator current and the estimated d-axis stator flux are orthogonal. The generated electromagnetic torque of an IM is dominated by the q-axis stator current, and the maximum torque-to-current ratio can be achieved. The mechanical equation of an IM is given by Eq. (8), where T_L is the load torque, B_m is the viscous friction coefficient, and J_m is the inertia of the motor. The decoupled voltage commands follow from Eqs. (9) and (10), where v′ᵉ_ds and v′ᵉ_qs are the outputs of the d-axis and q-axis stator current controllers, respectively.
From Eqs. (9) and (6), and with the decoupling of Eq. (10), the plants of the d-axis and q-axis stator current control loops can be respectively obtained. Since the bandwidths of the inner stator current control loops are much higher than those of the flux control loop and the speed control loop, the closed-loop gain of the stator current control loops can be regarded as unity. (14) According to Eqs. (6) and (8), the plants of the flux control loop and the speed control loop are respectively obtained. A block diagram of the IM's linear control under the SFVC condition is shown in Fig. 1. Here, the paired parameters (K_ps, K_is), (K_pf, K_if), (K_pd, K_id), and (K_pq, K_iq) are the proportional and integral (PI) gains of the speed controller, flux controller, d-axis stator-current controller, and q-axis stator-current controller, respectively.
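As an illustration of the PI-type controllers used in these loops, a minimal discrete-time PI controller is sketched below; the gains, sampling period, and saturation limits are placeholders rather than designed values, and no anti-windup logic is included.

```python
class PIController:
    """Discrete-time PI controller of the form used for the speed, flux, and
    stator-current loops; kp and ki correspond to the (Kp, Ki) pairs in Fig. 1."""

    def __init__(self, kp, ki, dt, out_min=None, out_max=None):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, reference, feedback):
        error = reference - feedback
        self.integral += error * self.dt
        output = self.kp * error + self.ki * self.integral
        if self.out_max is not None:
            output = min(output, self.out_max)
        if self.out_min is not None:
            output = max(output, self.out_min)
        return output

# Example: the q-axis current loop output before the decoupling term is added
# (gains and sampling period are hypothetical).
# iq_controller = PIController(kp=10.0, ki=200.0, dt=1e-4)
# v_qs_prime = iq_controller.update(iq_reference, iq_measured)
```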
Speed Estimation Scheme of SFVC IM Drive
In the speed estimation scheme of the SFVC IM drive, the feedback speed is replaced by a signal of the estimated speed, which is derived from the designed MRAS synchronous speed identification scheme based on the reactive power.
MRAS speed estimation scheme based on reactive power
In the proposed speed estimation SFVC IM drive, the estimated synchronous speed is derived from an MRAS speed estimation scheme based on the reactive power of an IM, and the estimated rotor speed is obtained by subtracting the slip speed from the estimated synchronous speed. This approach is adopted to achieve the desired speed-estimation performance of the SFVC IM drive.
Eqs. (19) and (20) are obtained by substituting Eqs. (17) and (18). According to MRAS theory, (15) Eq. (19) can be used as the reference model because it does not contain the estimated synchronous speed ω̂_e. Equation (20), which contains ω̂_e, can be used as the adjustable model. The difference between the reference model and the adjustable model is fed to an adaptation mechanism to identify the estimated synchronous speed ω̂_e, and the adaptation mechanism of the MRAS was designed using the PSO algorithm. The proposed MRAS synchronous speed identification scheme based on the reactive power is shown in Fig. 2.
Here, the current and voltage of the stator were obtained from an IM using isolation voltage sensors and Hall effect current sensors.
Using the MRAS synchronous speed identification scheme with the PSO algorithm adaptation mechanism and Eq.
PSO algorithm adaptation mechanism design
The PSO algorithm was used to design the adaptation mechanism of the MRAS synchronous speed identification scheme for the speed estimation SFVC IM drive because the algorithm is suitable for irregular and time-varying conditions. The PSO algorithm is a random search algorithm based on swarm intelligence and imitates the foraging of a bird flock. (16) The original PSO algorithm tends to converge to local solutions, and some modified methods have been developed, such as the spider monkey, dynamic system tracking, inertia weight, and constriction factor algorithms. (17,18) In this system, the inertia weight method was used, which, compared with other intelligent search methods, (19,20) has the advantages of few parameters, rapid convergence, and suitability for various conditions. The inertia weight PSO algorithm is an iterative procedure. First, a group of particles is randomly produced, and the current fitness value of each particle is computed to determine whether it is better than the best fitness value of that individual particle. Then, the velocity and position of each particle are updated, and the new fitness value of each particle is also computed. The updated velocity and position formulas of the particle are

V_i(k+1) = w·V_i(k) + C_1·rand·(P_best − x_i(k)) + C_2·rand·(G_best − x_i(k))
x_i(k+1) = x_i(k) + V_i(k+1)

where V_i(k) and V_i(k+1) are the current and next velocity of the particle, x_i(k) and x_i(k+1) are the current and next position of the particle, P_best is the best position of the individual particle, G_best is the best position of the particle swarm, w is the weighting factor, C_1 and C_2 are the learning factors of the individual particle and the swarm, and rand is a uniformly distributed random variable over [0,1]. Figure 3 shows the two-dimensional relationship between the velocity and position search spaces for a particle, and a flow chart of the proposed inertia weight PSO algorithm is shown in Fig. 4.
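As a concrete illustration of the inertia weight PSO procedure described above, the following Python sketch minimises a generic quadratic fitness function, which stands in for the MRAS adaptation error; the fitness function, bounds, swarm size, and parameter values (w, C_1, C_2) are illustrative placeholders rather than the values used in the drive.

import random

def pso(fitness, dim, bounds, n_particles=20, n_iter=100,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal inertia-weight PSO sketch; all parameter values are illustrative."""
    lo, hi = bounds
    # Randomly produce the initial particle positions; velocities start at zero for simplicity.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_val = [fitness(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia-weight velocity update followed by the position update.
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pbest[i][d] - x[i][d])
                           + c2 * random.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = fitness(x[i])
            if val < pbest_val[i]:           # update the individual best
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:          # update the swarm best
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

# Example: minimise a simple quadratic (placeholder for the MRAS adaptation error).
best, err = pso(lambda p: sum(pi ** 2 for pi in p), dim=2, bounds=(-5.0, 5.0))
print(best, err)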
A block diagram of the proposed adaptive speed-estimation SFVC IM drive based on the inertia weight PSO algorithm is shown in Fig. 5. The system includes a speed controller, flux controller, q-axis and d-axis stator current controllers, d-axis flux decoupling, q-axis voltage decoupling, flux command calculation, d-axis flux estimation, slip speed estimation, coordinate transformation, and MRAS synchronous speed identification based on the inertia weight PSO algorithm. In this study, the root-locus method was used to design the PI-type controllers for the speed control loop, flux control loop, and d-axis and q-axis stator current control loops.
The proportional gain (K p ), integral gain (K i ), and bandwidth (B.W) for the four PI-type controllers are shown in Table 1. The root locus and Bode plot of the designed flux control loop are respectively shown in Figs. 6 and 7.
Simulation and Experiment
A standard three-phase, 220 V, 0.75 kW, Δ-connected, squirrel-cage IM was used in the experiments to confirm the effectiveness of the proposed adaptive speed-estimation SFVC IM drive based on the PSO algorithm. The IM parameters are listed in Table 2. In a running cycle, the sequence of speed commands is as follows: forward-direction acceleration from t = 0 s to t = 1 s; forward-direction steady-state operation during 1 ≤ t ≤ 4 s; forward-direction braking to reach zero speed in the interval 4 ≤ t ≤ 5 s; reverse-direction acceleration from t = 5 s to t = 6 s; reverse-direction steady-state operation during 6 ≤ t ≤ 9 s; reverse-direction braking to reach zero speed in the interval 9 ≤ t ≤ 10 s. The simulated and measured responses in the first running cycle are shown in Figs. 8-13. Each figure contains six responses: the estimated rotor speed, actual rotor speed, stator current, electromagnetic torque, estimated synchronous angle position, and stator flux locus. The simulated and measured responses with a 2 N-m load for reversible steady-state speed commands ±600 rpm, ±1200 rpm (in the constant-torque mode), and ±2200 rpm (in the constant-power mode) are shown in Figs. 8 and 9, 10 and 11, and 12 and 13, respectively. In this study, the MRAS synchronous speed identification scheme was also designed using the conventional PI-type adaptation mechanism, and the simulated and measured responses with a 2 N-m load for the steady-state speed command ±1200 rpm are respectively shown in Figs. 14 and 15.
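The running cycle described above can be expressed as a simple piecewise speed-command profile. The following Python sketch returns the commanded speed at a given time for an arbitrary steady-state level; the ramps are assumed to be linear, which is not stated explicitly in the text.

def speed_command(t: float, level_rpm: float) -> float:
    """Speed command for one 10 s running cycle (linear ramps assumed)."""
    if 0.0 <= t < 1.0:        # forward-direction acceleration
        return level_rpm * t
    if 1.0 <= t < 4.0:        # forward-direction steady state
        return level_rpm
    if 4.0 <= t < 5.0:        # forward-direction braking to zero speed
        return level_rpm * (5.0 - t)
    if 5.0 <= t < 6.0:        # reverse-direction acceleration
        return -level_rpm * (t - 5.0)
    if 6.0 <= t < 9.0:        # reverse-direction steady state
        return -level_rpm
    if 9.0 <= t <= 10.0:      # reverse-direction braking to zero speed
        return -level_rpm * (10.0 - t)
    return 0.0

print([round(speed_command(t, 1200.0)) for t in (0.5, 2.0, 4.5, 5.5, 7.0, 9.5)])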
The percentage errors of the estimated rotor speed for the simulated and measured responses using the proposed adaptive speed-estimation SFVC IM drive scheme are respectively shown in Figs. 16 and 17. From the simulated and measured results for the different operating conditions shown in Figs. 8-13, the rotor speed was accurately estimated. In addition, good responses of the stator current and electromagnetic torque were attained, and the estimated synchronous angle position and the circular shape of the estimated stator flux locus verified the exactness of the coordinate transformation between the synchronous and stationary frames. These results show that the desired performance can be achieved using the proposed adaptive speed-estimation SFVC IM drive based on the PSO algorithm. Comparing Figs. 10 and 11 with Figs. 14 and 15, it can be concluded that the adaptation mechanism based on the PSO algorithm performs better than the conventional PI-type adaptation mechanism. According to Figs. 16 and 17, for the simulation and measurement responses, the estimation percentage errors between the actual and estimated speeds are approximately 0.4 and 0.8%, respectively.
Conclusions
In this study, an adaptive synchronous speed on-line estimation scheme based on the inertia weight PSO algorithm was proposed for the speed estimation of an SFVC IM drive. The MRAS synchronous speed estimation scheme was established on the basis of the reactive power of an IM, and the estimated rotor speed was acquired by subtracting the estimated slip speed from the estimated synchronous speed. The adaptation mechanism of the MRAS was designed using the inertia weight PSO algorithm. The stator current signals required to implement this adaptive speed-estimation SFVC IM drive were measured using Hall effect current sensors. The operation speed can be extended to the constant-power mode using the field-weakening technique. Both the simulation and experiment results (including the estimated rotor speed, stator current, electromagnetic torque, estimated synchronous angle position, and stator flux locus) confirmed that superior performance was achieved in terms of acceleration, steady-state operation, and braking operation at different reversal speeds. | 2020-06-04T09:05:46.759Z | 2020-05-31T00:00:00.000 | {
"year": 2020,
"sha1": "76a3dc80f4e78c0b92c1725956becea71484a681",
"oa_license": "CCBY",
"oa_url": "https://myukk.org/SM2017/sm_pdf/SM2223.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c8aed2a272c23f4783ba9f00c79f7ef86b366f04",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
61795531 | pes2o/s2orc | v3-fos-license | Eye Tracking Scanpath Analysis Techniques on Web Pages: A Survey, Evaluation and Comparison
Eye tracking has commonly been used to investigate how users interact with web pages, with the goal of improving their usability. This article comprehensively revisits the techniques that could be applicable to eye tracking data for analysing user scanpaths on web pages. It also uses a third-party eye tracking study to compare these techniques. This allows researchers to recognise existing techniques for their goals, understand how they work and know their strengths and limitations so that they can make an efficient choice for their studies. These techniques can mainly be used for calculating similarities/dissimilarities between scanpaths, computing transition probabilities between web page elements, detecting patterns in scanpaths and identifying common scanpaths. The scanpath analysis techniques are classified into four groups by their goals so that researchers can directly focus on the appropriate techniques for a sequential analysis of user scanpaths on web pages. This article also suggests dealing with the limitations of these techniques by pre-processing eye tracking data, considering cognitive processing and addressing their reductionist approach.
Introduction
Web pages are typically made up of visual elements such as menus, headers and footers. These elements allow users to complete their tasks. For example, users can navigate within web pages by using the menus. In order to investigate how users interact with these visual elements, many researchers (i.e., academic/usability researchers and usability evaluators) prefer to conduct eye tracking studies. These studies reveal which visual elements are fixated and which paths are followed (Holsanova, Rahm, & Holmqvist, 2006; Yesilada, Jay, Stevens, & Harper, 2008; Albanesi, Gatti, Porta, & Ravarelli, 2011; Hejmady & Narayanan, 2012; Eraslan, Yesilada, & Harper, 2014). For example, Gossen, Höbel, and Nürnberger (2014) conducted an eye tracking study and investigated how children interact with search engines. Their findings illustrate that children typically experience difficulties in estimating the relevancy of a search result. Therefore, they suggest that search engines should be improved to support children in finding the most relevant results.
Eye tracking studies supplement other usability methods, especially the Retrospective Think Aloud method where users are asked to verbalise their performance after they complete their tasks (Guan, Lee, Cuddihy, & Ramey, 2006). A study conducted by Guan et al. (2006) illustrates that when users encounter difficulties in completing their tasks, they verbalise their performance at a very abstract level. Hence, when users are asked to complete more complicated tasks, scanpath analysis becomes more crucial to understand their real performance. Besides this, scanpath analysis is likely to be more valuable for exploratory tasks in comparison with goal-directed tasks. For goal-directed tasks, various metrics can be used, such as the task completion time. However, there is no specific goal in exploratory tasks, thus researchers can benefit from scanpath analysis to understand how users explore web pages. Groner, Siegenthaler, Raess, Wurtz, and Bergamin (2009) propose a multifunctional usability analysis approach that consists of eye gaze analysis, verbal reports, log file analysis, retrospective interviews and performance characteristics (such as failure and success). They applied their approach to an eLearning module of the Moodle learning management system to investigate how users interact with the module. Their findings suggest that users experience difficulties in navigating the module because of a large amount of visual information on a page. To improve the navigation, they suggest including less information on the start page and providing a table of contents that gives direct access to other parts.
When users read web pages, their eyes become relatively stable at certain points which are referred to as fixations. A series of fixations represents their scanpaths on the web pages. Figure 1 shows an example of a scanpath of a particular user on the HCW Travel web page which is segmented into its visual elements (Brown, Jay, & Harper, 2012; Akpinar & Yeşilada, 2013). As can be seen from this figure, fixations are illustrated with circles where larger circles are used for longer fixations. The user here fixated the visual elements B, D, C and E respectively. Therefore, the scanpath is represented as BDCE.
The scanpath theory of Noton and Stark (1971a, 1971b) suggests that a user establishes his or her own scanpath on the first visit to a visual stimulus and then follows the same scanpath, with some variations, on the following visits to the visual stimulus. It also suggests that the scanpaths are not similar between different users on a particular visual stimulus, and between different visual stimuli for a particular user.
As web pages are repeatedly visited visual stimuli, both Josephson and Holmes (2002) and Burmester and Mast (2010) tested this theory with web pages.However, they recognised that the scanpath theory could not be fully supported on web pages.In particular, they recognised that the users typically followed various scanpaths on a particular web page instead of a single scanpath as suggested by the scanpath theory.Josephson and Holmes (2002) also recognised many cases where the most similar scanpaths on a particular web page were from different users instead of the same user.The scanpaths can also be affected by user tasks and different individual factors, such as a gender and user expertise (Eraslan & Yesilada, 2015;Underwood, Humphrey, & Foulsham, 2008).
A number of techniques have been suggested in the literature to visualise user scanpaths for analysing them in an exploratory and qualitative way (Räihä, Aula, Majaranta, Rantala, & Koivunen, 2005; Blascheck et al., 2014). These techniques have already been comprehensively reviewed by Blascheck et al. (2014). Apart from the scanpath visualisation techniques, there are also different techniques that could be applicable to eye tracking data for analysing user scanpaths, which are correlated with visual elements of web pages, in a more detailed way. These techniques can typically be used for calculating a similarity/dissimilarity between a pair of scanpaths, computing transition probabilities between visual elements, detecting patterns within given scanpaths and identifying a common scanpath for multiple scanpaths. To make the best use of available data, researchers should select an appropriate technique for their studies. At this point, it is crucial for them to know the strengths and limitations of these techniques. This article, therefore, initially explains how these techniques work. It then provides an analysis and critical evaluation of their strengths and weaknesses supported by data from an eye tracking study.
Although there are several review articles in this field, they mainly focus on a specific set of techniques which can be used for a particular objective, for example, techniques to compare two scanpaths (Le Meur & Baccino, 2013;Anderson, Anderson, Kingstone, & Bischof, 2014).Additionally, some of these techniques are summarised in the related work sections of existing publications (Duchowski et al., 2010;Mast & Burmester, 2011).Furthermore, Holmqvist et al. (2011) published a book on eye tracking methodologies which also introduced some of these techniques.
To the best of the authors' knowledge, this article is the most comprehensive review and analysis of the techniques which can be used to compare and correlate (i.e., computing transition probabilities between visual elements, finding patterns, identifying common scanpaths, etc.) not only two scanpaths but also more than two scanpaths.It makes a contribution to eye tracking research on the web by guiding researchers to choose an appropriate technique and revealing some directions to address the limitations.
In order to investigate both the strengths and limitations of these techniques, we evaluated them with an eye tracking dataset from a study conducted with twelve users by Brown et al. (2012).We then criticised the techniques based on the results, this meant we used a data-driven approach to investigate, compare and contrast the techniques.
Scanpath analysis is relevant to all studies with the aim of analysing sequential patterns on visual stimuli.Specifically, it can be used for investigating the differences between the sequential patterns of different user groups on web pages, such as male and female groups (Eraslan & Yesilada, 2015).In addition, it can be conducted for recognising the search efficiency of users.For example, longer scanpaths can be interpreted as less efficient searching (Ehmke & Wilson, 2007).Scanpaths can also be analysed to identify common sequential patterns that can be used for different objectives.In particular, common patterns can be a guide to re-engineering web pages to make them more accessible on small screen devices by allowing users to directly access firstly visited visual elements without a lot of scrolling and zooming (Akpınar & Yes ¸ilada, 2015).
The remainder of this article firstly explains our methodology to evaluate the scanpath analysis techniques, secondly revisits them along with their strengths and limitations based on our evaluation, and finally discusses and criticises the techniques to provide some directions to address their limitations.
Methodology
In order to investigate both the strengths and limitations of the scanpath analysis techniques on web pages, we decided to evaluate them with a third-party eye tracking dataset.In other words, we decided to use a dataset that was not previously used to evaluate any of these techniques.In addition, the data was not originally collected for this purpose.Therefore, in this article, we re-evaluated the techniques with the same dataset.This made the evaluation more objective to compare and contrast the techniques.
We unfortunately could not evaluate three of these techniques as highlighted in Table 2. ScanMatch technique works with a grid-layout page segmentation by default (see Figure 2) (Cristino, Mathôt, Theeuwes, & Gilchrist, 2010). It also allows another type of segmentation to be applied by associating each pixel with a particular segment. However, there may be some spaces between segments (see Figure 1). In other words, some pixels may not be associated with a particular segment. Because of this limitation of ScanMatch technique, it could not be applied to the dataset. Besides this, the T-Pattern Detection technique is not publicly available, and therefore it could not be applied to the dataset (Magnusson, 2000). As the Multiple Sequence Alignment technique is described at a very abstract level and lacks details, it could also not be applied to the dataset (Hembrooke, Feusner, & Gay, 2006). However, we still analyse these techniques based on their given descriptions.
Dataset
As also stated by Shen and Zhao (2014), there is no publicly available eye tracking dataset on real web pages. Although we also asked some other researchers in related fields whether they had eye tracking datasets to share with us, we could not find any appropriate dataset. Fortunately, we have an eye tracking dataset from a study conducted by Brown et al. (2012) in March 2010. They are members of our Interaction Analysis and Modelling Lab at the University of Manchester. This study aimed to investigate how users interact with dynamic content on web pages. In this study, the participants sat in front of a 17" monitor with a built-in Tobii x50 eye tracker and a screen resolution of 1280 x 1024. The HCW Travel web page (see Figure 1) was shown to the participants and their eye movements were recorded.
The participants were asked to read the latest news from the HCW Travel Company and then click on the link for the special offers. This meant they were required to fixate certain visual elements on the web page in a particular order. Specifically, they needed to fixate the element E that includes the latest news, and then fixate the element D that contains the link to see the special offers. Since the latest news was shown next to the Latest News title and the link for the special offers was labelled as Special Offers, the participants could find the related visual elements by only scanning the web page.
Twelve people participated in the eye tracking study. These were students and staff at the University of Manchester ranging between the ages of 18 and 45. We noticed some problems with the results of the eye tracking recordings for two participants as they were distracted, and therefore we had to eliminate their data from our evaluation process. Although the sample size of this eye tracking study is small, it is still sufficient for illustrating the strengths and weaknesses of the scanpath analysis techniques. A small dataset is in fact better suited to clearly explaining how these techniques work and to comparing them.
Visual Elements
In our evaluation, we used the extended and improved version of the Vision Based Page Segmentation (VIPS) algorithm to segment the HCW Travel web page into its visual elements because it automatically discovers visual elements and correlates them with the underlying source code, which is important for further processing of web pages (Akpinar & Yeşilada, 2013). In particular, when scanpaths are correlated with these visual elements, they can then be used for the purpose of re-engineering web pages (Yesilada, Harper, & Eraslan, 2013).
The VIPS algorithm segments web pages based on the selected segmentation level, where smaller visual elements are identified with higher levels. As the 5th level was determined to be the most successful level with approximately 74% user satisfaction, we used this level for our evaluation (Akpinar & Yeşilada, 2013).
User Scanpaths in Terms of Visual Elements
Once the visual elements were discovered, we exported the eye tracking data of the ten users and correlated their fixations with the visual elements to construct their individual scanpaths in terms of the visual elements. To achieve this, we used the width, height, x and y coordinates of the visual elements and the x and y coordinates of the fixations. We then simplified the individual scanpaths by abstracting consecutive repetitions as stated in the literature (Brandt & Stark, 1997; Jarodzka, Holmqvist, & Nyström, 2010). For example, AABBBCC becomes ABC after the abstraction.
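A minimal Python sketch of this pre-processing step is given below, assuming each visual element can be treated as a simple bounding box; the element rectangles and fixation coordinates are hypothetical, and hit-testing simply checks whether a fixation point falls inside an element before consecutive repetitions are collapsed.

def element_at(x, y, elements):
    """Return the label of the element whose bounding box contains (x, y), if any."""
    for label, (ex, ey, w, h) in elements.items():
        if ex <= x <= ex + w and ey <= y <= ey + h:
            return label
    return None

def scanpath(fixations, elements):
    """Map fixations to element labels and collapse consecutive repetitions."""
    path = []
    for x, y in fixations:
        label = element_at(x, y, elements)
        if label and (not path or path[-1] != label):
            path.append(label)
    return "".join(path)

# Hypothetical element rectangles (x, y, width, height) and fixation points.
elements = {"A": (0, 0, 200, 100), "B": (0, 100, 200, 100), "C": (200, 0, 400, 200)}
fixations = [(50, 50), (60, 55), (250, 80), (40, 150), (45, 160)]
print(scanpath(fixations, elements))  # prints "ACB": the repeated A and B fixations are collapsed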
These ten individual scanpaths on the HCW Travel web page are listed in Table 1 (Yesilada et al., 2013; Eraslan, Yeşilada, & Harper, 2013). As can be seen from the table, the participants followed slightly different paths to complete their tasks. For instance, the third and fourth participants fixated more visual elements to complete their tasks in comparison with participants six and eight (Yesilada et al., 2013). When the individual scanpaths were ready, we evaluated the scanpath analysis techniques with them. The following section revisits the techniques along with their strengths and limitations based on our evaluation.
Scanpath Analysis Techniques
In this article, we classify the scanpath analysis techniques into four main groups according to their goals. These groups are as follows: (1) Similarity/Dissimilarity Calculation, (2) Transition Probability Calculation, (3) Pattern Detection and (4) Common Scanpath Identification. Table 2 shows an overview of this classification. Specifically, the table represents the groups along with their techniques. For example, it represents that eMINE scanpath algorithm belongs to the group of common scanpath identification (Eraslan et al., 2014). The techniques within the same group mainly have the same goal but do not necessarily have the same analysis approach. In particular, in the common scanpath identification group, one approach suggests applying a hierarchical clustering with the Dotplots algorithm (Goldberg & Helfman, 2010) whereas another approach (eMINE scanpath algorithm) suggests using the String-edit algorithm and the Longest Common Subsequence technique together for a hierarchical clustering (Eraslan et al., 2014). In addition, Table 2 shows the main requirements for each technique to be able to run them. For example, eMINE scanpath algorithm only requires a number of scanpaths that are represented in terms of visual elements. Table 2 also indicates whether the techniques can be used for more than two scanpaths; all of them can be, except for the techniques from the similarity/dissimilarity calculation group. In that group, the techniques work in a pairwise manner, which means they can work with only two scanpaths at the same time. Moreover, Table 2 illustrates whether the techniques consider fixation durations and the positions of visual elements on web pages. Most of the techniques tend to ignore fixation durations while analysing scanpaths. However, it is widely accepted that fixation duration is associated with the depth of processing and the ease or difficulty of information processing (Velichkovsky, Rothert, Kopf, Dornhöfer, & Joos, 2002; Follet, Meur, & Baccino, 2011). Furthermore, they usually do not consider the positions of visual elements on web pages. However, eye movement lengths are shorter between close visual elements in comparison with visual elements which are distant from each other.
There are also a number of techniques with a reductionist approach.In this context, we refer to the reductionism as an oversimplification of multiple scanpaths with the loss of some important information.Thus, the reductionism is associated with detecting patterns and identifying common scanpaths.We articulate the reductionism as follows: (1) When an algorithm is likely to lose a shared visual element because of its position in individual scanpaths, it is classified as reductionist.
(2) When an algorithm is intolerant of small deviations within individual scanpaths (especially, ignoring the visual element fixated by the majority), it is also classified as reductionist.
This section revisits and investigates all of these techniques in depth based on our evaluation.
Similarity/Dissimilarity Calculation
A number of techniques are available to compare two scanpaths and determine the similarity or dissimilarity between them. These techniques are as follows: the String-edit algorithm (Heminghous & Duchowski, 2006), the String-edit algorithm with a substitution matrix (Takeuchi & Habuchi, 2007), and ScanMatch technique (Cristino et al., 2010). As these techniques do not focus on generating common scanpaths, the reductionism is not applicable for this group.
String-edit Algorithm.The Levenshtein Distance algorithm, which is commonly known as the String-edit algorithm, has been widely used for comparing a pair of scanpaths represented in a string format (Privitera & Stark, 2000;Josephson & Holmes, 2002;Pan et al., 2004;Heminghous & Duchowski, 2006;Underwood et al., 2008;Duchowski et al., 2010;Eraslan et al., 2014;Eraslan & Yesilada, 2015).When user scanpaths are correlated with visual elements of web pages, they are represented in a string format.Therefore, this algorithm can be applied to calculate the distance (i.e., dissimilarity) between two scanpaths by transforming one of them to another with a minimum number of editing operations which are referred to as insertion, deletion and substitution.The minimum number of operations represent the distance between the scanpaths.Albeit the String-edit algorithm is designed to compare a pair of scanpaths, it can be applied to more than two scanpaths in a pairwise manner.Therefore, the most similar scanpaths to a particular scanpath can be identified.
Equation 1 mathematically formalises how to calculate the similarity between a pair of scanpaths as a percentage by using their String-edit distance (Underwood et al., 2008):

Similarity (%) = (1 − d/n) × 100   (1)

First of all, the distance (d) is divided by the length of the longer scanpath (n) to calculate a normalised score, preventing any possible inconsistencies that can be caused by different lengths. The normalised score is then subtracted from one and finally multiplied by 100.
Table 3 illustrates how the String-edit algorithm works with the fifth and seventh scanpaths in Table 1 and aligns them as an illustration.As seen from the example, 8 operations are required in total (1 insertion/deletion + 7 substitutions) to transform one to another.The distance therefore between these scanpaths is calculated as 8 by this algorithm.
When the String-edit algorithm is applied to the scanpaths in Table 1 in a pairwise manner, the matrix shown in Table 4 is created, which illustrates the distances between the scanpaths. According to this matrix, the most similar scanpaths are the seventh and ninth scanpaths because their distance (4) is the lowest in comparison to others.
Table 4
The String-edit distances between the scanpaths in Table 1
As mentioned above, the similarity between two scanpaths based on the String-edit distance can be calculated as a percentage. For example, the distance is calculated as 8 between the two scanpaths in Table 3. As the length of the longer scanpath is equal to 17, the distance is firstly divided by 17, and therefore the normalised score is calculated. When this score is subtracted from one and then multiplied by 100, the similarity between the scanpaths is calculated as 52.94%.
Table 3
The String-edit algorithm applied to the fifth and seventh scanpaths in Table 1
Even though the String-edit algorithm has been widely used and can easily be applied to scanpaths, it has some important drawbacks. In particular, the algorithm does not consider fixation durations while it is calculating a distance between two scanpaths. Besides this, the algorithm does not consider the positions of visual elements on a web page. For example, the cost of substituting the element B with the element E is no different from the cost of substituting the element B with the element G on the HCW Travel web page. However, as can be seen from Figure 1, the element B and the element E are very close to each other, whereas there are five different elements between the element B and the element G. This means that the eye movement between the element B and the element E is shorter than the eye movement between the element B and the element G.
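As a concrete illustration of the String-edit distance and the similarity percentage in Equation 1, a minimal Python sketch is given below; the two example scanpaths are hypothetical stand-ins rather than the actual scanpaths from Table 1.

def string_edit_distance(s1, s2):
    """Levenshtein distance with unit costs for insertion, deletion and substitution."""
    prev = list(range(len(s2) + 1))
    for i, a in enumerate(s1, 1):
        cur = [i]
        for j, b in enumerate(s2, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (a != b)))     # substitution (0 if the elements match)
        prev = cur
    return prev[-1]

def similarity(s1, s2):
    """Similarity percentage: (1 - d / length of the longer scanpath) * 100."""
    d = string_edit_distance(s1, s2)
    return (1 - d / max(len(s1), len(s2))) * 100

# Hypothetical scanpaths: distance 2, similarity 60%.
print(string_edit_distance("ABCDE", "ACDEE"), round(similarity("ABCDE", "ACDEE"), 2))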
String-edit Algorithm with a Substitution Matrix.
By default, the cost of all the operations used by the String-edit algorithm is equal to one. However, the substitution costs between visual elements may not be the same because they may be different in size and the geometrical distances between them can also vary. In other words, the substitution cost should be lower for closer visual elements because eye movements between those visual elements are shorter. To counteract this, a number of different approaches have been suggested in the literature (Josephson & Holmes, 2002; Takeuchi & Habuchi, 2007). In particular, Takeuchi and Habuchi (2007) propose to use a Euclidean Distance or a City Block Distance to construct a substitution cost matrix. Equation 2 illustrates the Euclidean Distance formula and Equation 3 shows the City Block Distance formula for calculating a substitution cost between two visual elements U and V, where U_1 and U_2 are the x and y coordinates of the centre of the visual element U and α is a type of normalisation parameter (Takeuchi & Habuchi, 2007). Takeuchi and Habuchi (2007) take this normalisation parameter as 0.001. The substitution costs between visual elements are calculated in a pairwise manner and then stored in a matrix. The substitution cost matrix can then be used with the String-edit algorithm. When the Euclidean Distance is used to construct a substitution matrix for the HCW Travel web page, the matrix shown in Table 6 is constructed. The matrix can then be used with the String-edit algorithm to calculate a distance between a pair of scanpaths on the HCW Travel web page by minimising the cost. Therefore, as illustrated in Table 5, the distance (namely, the total operation cost) between the fifth and seventh scanpaths in Table 1 is calculated as 1.96. Albeit this version of the String-edit algorithm considers the positions of visual elements on a web page while it is determining a distance between a pair of scanpaths, it still does not consider fixation durations.
As stated above, the String-edit algorithm has been widely used in the literature. In particular, Heminghous and Duchowski (2006) developed an application with the String-edit algorithm called iComp. This application segments an image into its areas of interest (AoIs) by using the fixation distribution over the image as suggested by Santella and DeCarlo (2004). Once the scanpaths are represented in terms of the AoIs, the application applies the String-edit algorithm to compare the scanpaths. Instead of automatic AoI detection, the evaluators and the users can also identify AoIs according to the evaluation goals or research questions (such as Holsanova et al. (2006)). Josephson and Holmes (2002) also used the String-edit algorithm to organise scanpaths into smaller groups based on their similarities to each other. Furthermore, Underwood et al. (2008) used it to compare the scanpaths of different user groups, such as experts and novices.
Table 5
The String-edit algorithm applied to the fifth and seventh scanpaths in Table 1 with a substitution matrix that is shown in Table 6
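As a rough illustration of such a position-based substitution cost matrix, the following Python sketch builds pairwise costs from element centre coordinates and uses them in a weighted edit distance; the element coordinates and the insertion/deletion cost of 1 are hypothetical, and scaling the Euclidean distance by the normalisation parameter α = 0.001 is one plausible reading of the cost described above rather than the exact formula of Takeuchi and Habuchi (2007).

import math

ALPHA = 0.001  # normalisation parameter value quoted above; its exact role here is assumed

def substitution_matrix(centres):
    """Substitution costs proportional to the Euclidean distance between element centres."""
    costs = {}
    for u, (ux, uy) in centres.items():
        for v, (vx, vy) in centres.items():
            costs[(u, v)] = ALPHA * math.hypot(ux - vx, uy - vy)
    return costs

def weighted_edit_distance(s1, s2, costs, indel=1.0):
    """Edit distance using the substitution matrix; the indel cost is a hypothetical constant."""
    prev = [j * indel for j in range(len(s2) + 1)]
    for i, a in enumerate(s1, 1):
        cur = [i * indel]
        for j, b in enumerate(s2, 1):
            cur.append(min(prev[j] + indel,
                           cur[j - 1] + indel,
                           prev[j - 1] + (0 if a == b else costs[(a, b)])))
        prev = cur
    return prev[-1]

# Hypothetical element centre coordinates.
centres = {"A": (100, 50), "B": (100, 150), "C": (400, 100)}
costs = substitution_matrix(centres)
print(round(weighted_edit_distance("ABC", "ACB", costs), 3))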
As stated above, the String-edit algorithm has been widely used in the literature.In particular, Heminghous and Duchowski (2006) developed an application with the String-edit algorithm called iComp.This application segments an image into its areas of interest (AoIs) by using fixation distribution over the image as suggested by Santella and DeCarlo (2004).Once the scanpaths are represented in terms of the AoIs, the application applies the String-edit algorithm to compare the scanpaths.Instead of automatic AoI detection, the evaluators and the users can also identify AoIs according to the evaluation goals or research questions (such as Holsanova et al. (2006)).Josephson and Holmes (2002) also used the String-edit algorithm to organise scanpaths into smaller groups based on their similarities between each other.Furthermore, Underwood et al. (2008) Eraslan, S., Yesilada, Y., and Harper, S. (2016) Eye Tracking Scanpath Analysis Techniques on Web Pages Table 5 The String-edit algorithm applied to the fifth and seventh scanpaths in Table 1 with a substitution matrix that is shown in Table 6 [ ScanMatch.Instead of calculating the distance between two scanpaths, Cristino et al. (2010) use the Needleman and Wunsch algorithm to directly calculate the similarity between two scanpaths by using a substitution cost matrix and a gap penalty.They call their approach ScanMatch4 .In this approach, the substitution costs are inversely related to the Euclidian distance where the lowest cost is assigned to a pair of visual elements that are the farthest from each other.In addition, there is a threshold value that represents the cut-off point for determining whether the substitution cost is positive or negative.The threshold value can be adjusted to ensure that the alignment is only applied to visual elements within the variability of the saccade amplitudes.The gap penalty can also be changed.Instead of using the substitution matrix generated by ScanMatch technique, a different type of a substitution matrix can also be introduced by a researcher.
The scanpath analysis techniques typically do not take fixation duration into consideration.
Thus, Cristino et al. (2010) suggest repeating elements in individual scanpaths based on their fixation durations.To achieve this, an appropriate duration (namely, temporal bin size) should be defined for repeating these elements proportionally to the fixation durations.For example, if the duration is defined as 50 milliseconds and the visual element C is fixated for 200 milliseconds by a user, his or her scanpath will include four (200/50=4) consecutive visual element C (...CCCC...).Takeuchi and Matsuda (2012) tested this approach with an eye tracking study by using the String-edit algorithm and a substitution matrix.They suggest that better results can be achieved by taking this approach into account for scanpath comparison.
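A minimal Python sketch of this duration-based repetition is shown below; the 50 ms bin size matches the example above, and the fixation data are hypothetical.

def duration_weighted(fixations, bin_ms=50):
    """Repeat each fixated element proportionally to its fixation duration."""
    path = []
    for element, duration in fixations:
        path.extend(element * max(1, round(duration / bin_ms)))
    return "".join(path)

# Hypothetical (element, fixation duration in ms) pairs.
print(duration_weighted([("B", 100), ("C", 200), ("D", 60)]))  # prints "BBCCCCD"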
ScanMatch technique is mainly designed for analysing user scanpaths on visual stimuli segmented by a grid layout. Figure 2 shows an example of a 5x5 grid-layout segmentation with ScanMatch technique, where each element is represented with one upper-case letter and one lower-case letter. The grid size can be adjusted and then user scanpaths can be represented with the segments. ScanMatch technique also allows a different segmentation to be used, but each pixel should be associated with a visual element. As there were some spaces between the visual elements generated by the extended and improved version of the VIPS algorithm (see Figure 1), ScanMatch technique could not be applied to the dataset that is described in the Methodology section.
Both the durations of fixations and the positions of visual elements on web pages are considered here.However, the subjectivity level of the results can be an important issue here as there are many parameters that need to be configured.The configurations of those parameters can easily affect the results.
Transition Probability Calculation
Markov Models (West, Haake, Rozanski, & Karn, 2006) and eSeeTrack technique (Tsang, Tory, & Swindells, 2010) are categorised under the transition probability calculation group as they determine transition probabilities between visual elements. The reductionism is again not applicable to this group.
Markov models.
In order to calculate transition probabilities between visual elements, Markov models have been used with some variations (West et al., 2006; Chuk, Chan, & Hsiao, 2014; Kang & Landry, 2015). These models can be applied to user scanpaths correlated with visual elements of web pages to generate a transition matrix which holds transition probabilities between visual elements. This matrix can then be used to recognise which visual element is likely to be fixated next, and which is likely to have been fixated before, a particular element, along with the associated probabilities.
Table 7 shows a transition matrix generated for the scanpaths in Table 1 by using the scanpath analysis tool of West et al. (2006) called eyePatterns (Yesilada et al., 2013; Eraslan et al., 2013). This matrix includes a positive integer and two percentages in each cell. The number illustrates the number of transitions from a visual element in a row to a visual element in a column. In addition, the percentages show row and column probabilities respectively, where the row probabilities are related to the next visual elements, and the column probabilities are associated with the previous visual elements. For example, as highlighted in Table 7, there are 11 transitions from the visual element A to the visual element C in total, and the transition probability from element A to element C is calculated as 55.01%. Moreover, the probability of fixating element A just before element C is calculated as 23.92%. As also stated in the literature, Markov models are incapable of identifying whether or not there is a typical scanpath for multiple scanpaths (Abbott & Hrycak, 1990; Josephson, 2010). For example, it could be assumed that the starting point is the visual element C for the scanpaths in Table 1 as it is firstly fixated by most of the users. According to the transition matrix in Table 7, users are more likely to fixate the visual element D after the visual element C. They are then more likely to fixate the visual element E and then the visual element D again. It continues as CDEDED..., and therefore a number of considerable questions arise, especially what the ending point should be and which probabilities should be used. Furthermore, the durations of fixations and the positions of visual elements on web pages are not used while creating the transition matrix.
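A minimal Python sketch of how such a transition matrix can be derived from a set of scanpaths is shown below; the two example scanpaths are hypothetical, and only the row (next-element) probabilities are computed.

from collections import Counter, defaultdict

def transition_probabilities(scanpaths):
    """Count element-to-element transitions and normalise each row to probabilities."""
    counts = defaultdict(Counter)
    for path in scanpaths:
        for current, nxt in zip(path, path[1:]):
            counts[current][nxt] += 1
    return {src: {dst: n / sum(row.values()) for dst, n in row.items()}
            for src, row in counts.items()}

# Hypothetical scanpaths correlated with visual elements.
probs = transition_probabilities(["CACDED", "CDEDCD"])
print(probs["C"])  # probabilities of the elements that follow C, e.g. {'A': 0.25, 'D': 0.75}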
eSeeTrack. There is another analysis tool called eSeeTrack which visualises eye tracking data based on the segments of visual stimuli by using a timeline and a tree visualisation (Tsang et al., 2010). The timeline illustrates a sequence of fixations based on visual elements for each user. Each fixation is represented as a coloured band, and the width of the band represents the duration. As a result, long fixations can be recognised in the timeline. Moreover, the tree visualisation allows recognition of transitions between segments for multiple users, where higher probabilities are highlighted with larger sizes. An example of the tree visualisation is illustrated in Figure 3. Even though fixation durations are considered by eSeeTrack, the positions of visual elements on visual stimuli are not taken into consideration. Similar to Markov models, eSeeTrack is not able to identify whether or not there is a typical scanpath for multiple scanpaths.
Instead of calculating transition probabilities between visual elements of web pages, some other techniques have also been suggested in the literature to detect patterns within multiple scanpaths.These techniques are revisited and investigated in the following section.
Pattern Detection
The pattern detection techniques range from searching for a particular pattern to detecting all patterns with the number of matches. This group consists of eyePatterns analysis tool (West et al., 2006), the Sequential Pattern Mining algorithm (Hejmady & Narayanan, 2012) and the T-Pattern Detection technique (Magnusson, 2000).
eyePatterns - Search Patterns. When people want to check whether a particular pattern exists within given scanpaths or not, they can use eyePatterns analysis tool (West et al., 2006). For example, on the HCW Travel web page, the participants were asked to read the latest news from the company and click on the link for the special offers. They, therefore, needed to fixate the visual elements E and D respectively to complete their tasks successfully. When the pattern ED is searched in their scanpaths, the analysis tool provides the results shown in Table 8. According to these findings, the pattern ED is not seen in all of the scanpaths. However, as these participants completed their tasks successfully, it is expected to see this pattern in their scanpaths. The participants might not complete their tasks directly, so there could be other visual elements between the visual elements E and D. Hence, this analysis tool also has an option (namely, gap size) to make the search more flexible by allowing other visual elements between the desired visual elements (maximum five elements), such as allowing the pattern ED to be found in the scanpath CACDECECD (S8 in Table 1).
While eyePatterns analysis tool is searching for sequential patterns in given scanpaths, it does not check the durations of fixations and the positions of visual elements on web pages.Moreover, if there are more than five elements between the desired two elements, the two elements cannot be combined to be detected as a pattern.
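The gap-tolerant search described above can be approximated with a regular expression, as in the following Python sketch; the maximum gap of five elements follows the tool's limit, the scanpath CACDECECD is the S8 example quoted above, and the second test string is hypothetical. This is only a stand-in for the tool's search, not a reimplementation of it.

import re

def contains_pattern(scanpath, pattern, max_gap=5):
    """Check whether the elements of `pattern` occur in order, allowing up to
    `max_gap` other elements between each pair of consecutive pattern elements."""
    regex = (".{0,%d}" % max_gap).join(re.escape(c) for c in pattern)
    return re.search(regex, scanpath) is not None

print(contains_pattern("CACDECECD", "ED"))   # True: E is followed by D within five elements
print(contains_pattern("ABAB", "ED"))        # False: neither E nor D occurs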
eyePatterns - Discover Patterns. eyePatterns analysis tool can also be used to discover patterns within multiple scanpaths based on the defined pattern length (West et al., 2006). When it is applied to a number of scanpaths with a particular length, it lists the patterns with how many times they are seen in the scanpaths and how many scanpaths are inclusive of the patterns. Hence, when this tool is applied to the scanpaths on the HCW Travel web page with the default length 4, the discovered patterns are listed as shown in Table 9. For example, the pattern EDED is seen ten times but in four out of ten scanpaths. This tool does not have a tolerance for extra visual elements within patterns while discovering them. This means it cannot discover the pattern EDED in the scanpath BCECDCDCDEDECDC because of the visual element C. For this reason, this tool is reductionist while discovering patterns. In other words, it is likely to detect no pattern or very short patterns that are not helpful for understanding users' behaviours on web pages. In addition, this tool does not consider the durations of fixations and the positions of visual elements on web pages during the discovery of patterns.
Sequential Pattern Mining. The Sequential Pattern Mining (SPAM) algorithm has also been used to identify patterns within multiple scanpaths (Hejmady & Narayanan, 2012). Although this algorithm was originally developed for detecting frequent patterns in a sequence database (Ayres, Flannick, Gehrke, & Yiu, 2002), it can also be applied to user scanpaths correlated with visual elements of web pages. In contrast to eyePatterns analysis tool, the SPAM algorithm has tolerance for extra visual elements within patterns while discovering them. To find the patterns that are included in all the scanpaths, the minsup parameter, i.e., the percentage of scanpaths that include the pattern, should be set to one (or 100%) (Fournier-Viger et al., 2014).
When the SPAM algorithm is applied to the scanpaths in Table 1 to detect patterns that are seen in all the scanpaths, it finds CDED and DCED as the longest patterns (Fournier-Viger et al., 2014). In contrast, as seen in Table 9, eyePatterns analysis tool cannot detect any pattern with the length four which exists in all the scanpaths. Similar to eyePatterns analysis tool, the SPAM algorithm does not pay attention to the durations of fixations and the positions of visual elements on web pages. This algorithm also has a reductionist approach. Specifically, when the individual scanpaths VWXYZ, VWYZ and VXWZY are available, the patterns VWY and VWZ are identified as the longest patterns that are seen in all the scanpaths. However, the elements V, W, Y and Z exist in all the scanpaths.
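A simplified stand-in for this kind of check is sketched below in Python: it only tests whether a candidate pattern occurs as a (non-contiguous) subsequence of every scanpath, i.e., the minsup = 100% case, rather than enumerating candidate patterns as SPAM does; the example scanpaths VWXYZ, VWYZ and VXWZY are the ones quoted above.

def is_subsequence(pattern, scanpath):
    """True if the elements of `pattern` appear in `scanpath` in order (gaps allowed)."""
    it = iter(scanpath)
    return all(element in it for element in pattern)

def supported_by_all(pattern, scanpaths):
    """minsup = 100%: the pattern must be a subsequence of every scanpath."""
    return all(is_subsequence(pattern, s) for s in scanpaths)

scanpaths = ["VWXYZ", "VWYZ", "VXWZY"]
print(supported_by_all("VWY", scanpaths))   # True
print(supported_by_all("VWXY", scanpaths))  # False: the second scanpath has no X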
T-Pattern Detection.T-Pattern Detection, which stands for Temporal Pattern Detection, is another approach that has been used to detect patterns within user scanpaths (Burmester & Mast, 2010;Mast & Burmester, 2011;Drusch & Bastien, 2012).It was originally developed by Magnusson (2000) in the area of behavioural science for analysing social interaction but now it can be used in different areas (Mast & Burmester, 2011).For example, this approach was used by Borrie, Jonsson, and Magnusson (2002) to analyse the movements of the ball and the players in some soccer matches.As the T-Pattern Detection technique is now a commercial product6 , researchers need to pay for using it in their studies.
T-Pattern detection requires a behaviour sequence which is coded in terms of the occurrences of event types with their times (Magnusson, 2000).The event type represents the beginning or ending of some particular behaviour such as starting to fixate the visual element A (Magnusson, 2000).As also stated by Burmester and Mast (2010) and Mast and Burmester (2011), two event types are defined as a T-Pattern if they meet the following two conditions: 1.Both of the two event types appear more than once in the behaviour sequence in the same order.
2. Both of the two event types appear invariantly over time.
According to Magnusson (2000), there are two possible types of distribution which are called Critical Intervals: Fast and Free Critical Intervals. As also stated by Burmester and Mast (2010) and Mast and Burmester (2011), for the Fast Critical Interval type, the event type A should occur relatively quickly before the event type B. In contrast, for the Free Critical Interval type, the event type A can occur before the event type B within a less restricted time interval (Magnusson, 2000; Mast & Burmester, 2011). A T-Pattern with n components can be represented as follows: X_1 [d1, d2]_1 X_2 [d1, d2]_2 ... X_i [d1, d2]_i X_{i+1} ... X_n, where [d1, d2] represents the critical interval (Magnusson, 2000).
This technique uses the significance level parameter while generating T-Patterns (Magnusson, 2000). This parameter is related to critical intervals and it influences the number of event types in T-Patterns (Magnusson, 2000). When the significance level decreases, fewer and shorter patterns are detected (Magnusson, 2000). The T-Patterns can also be filtered by using various criteria, such as the minimum pattern length and the minimum number of occurrences of the pattern (Magnusson, 2000).
The T-Pattern Detection technique has many different parameters, and the detected patterns can be affected by the adjustment of these parameters. As a consequence, the subjectivity level of the results can be a problem. By using strict values, the technique can also become reductionist, especially with the Fast Critical Intervals. As illustrated in Figure 4, the pattern AB may not be detected as a T-Pattern because of the Fast Critical Interval. Like the majority of the scanpath analysis techniques (see Table 2), the T-Pattern Detection technique does not consider the positions of visual elements on visual stimuli. However, the durations of fixations are used for detecting T-Patterns.
Common Scanpath Identification
As presented above, different techniques have been used to detect patterns within user scanpaths. These techniques can detect more than one pattern for given scanpaths. For example, the SPAM algorithm provides CDED and DCED as the longest patterns for the scanpaths in Table 1. In contrast to these techniques, different techniques are also available to identify one scanpath for representing the entire group, which is typically known as a common scanpath. This group includes the following techniques: the Shortest Common Supersequence technique (Räihä, 2010), the Multiple Sequence Alignment technique (Hembrooke et al., 2006), the Position-based Weighted Models of Sutcliffe and Namoun (2012) and of Holsanova et al. (2006), Hierarchical Clustering with the Dotplots algorithm (Goldberg & Helfman, 2010) and eMINE scanpath algorithm (Eraslan et al., 2014).
Shortest Common Supersequence. One of these techniques is the Shortest Common Supersequence (SCS) technique (Räihä, 2010). According to Räihä (2010), the sequence P is a supersequence of the sequences S1 and S2 if the deletion of zero or more characters from P can provide S1 and S2. When this technique is repeatedly applied to the scanpaths in Table 1, it provides the scanpath shown in Example 1.
Example 1
The common scanpath of the Shortest Common Supersequence technique for the scanpaths in Table 1
As can be clearly seen from this common scanpath, the technique has considerable weaknesses. In particular, it provides a much longer scanpath compared to the individual scanpaths. For example, the average length of the individual scanpaths in Table 1 is equal to 19.9 (standard deviation: 10.61), but the common scanpath for those scanpaths consists of 63 visual elements including repetitions. In contrast to the reductionism, this technique provides an unnecessarily complicated result. Furthermore, the common scanpath is not supported by the majority. For instance, it includes the visual element F four times, but this visual element is only included in the third scanpath three times and in the fourth scanpath only once. Neither the durations of fixations nor the positions of visual elements on web pages are used by the SCS technique.
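For two scanpaths, a shortest common supersequence can be computed by dynamic programming, as in the Python sketch below for a pair of hypothetical scanpaths. Note that repeatedly applying such a pairwise merge over a whole list, as described above, is a simplification: it does not necessarily yield the shortest supersequence of the entire set.

def shortest_common_supersequence(a, b):
    """Dynamic-programming SCS of two strings; the supersequence itself is built,
    not just its length."""
    n, m = len(a), len(b)
    # dp[i][j] holds an SCS of a[i:] and b[j:]
    dp = [[""] * (m + 1) for _ in range(n + 1)]
    for i in range(n, -1, -1):
        for j in range(m, -1, -1):
            if i == n:
                dp[i][j] = b[j:]
            elif j == m:
                dp[i][j] = a[i:]
            elif a[i] == b[j]:
                dp[i][j] = a[i] + dp[i + 1][j + 1]
            else:
                left, down = dp[i][j + 1], dp[i + 1][j]
                dp[i][j] = (b[j] + left) if len(left) <= len(down) else (a[i] + down)
    return dp[0][0]

# Hypothetical scanpaths; an SCS of length 6, e.g. "CACDED".
print(shortest_common_supersequence("CDED", "CACD"))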
Multiple Sequence Alignment Technique. Hembrooke et al. (2006) propose to use the multiple sequence alignment technique to identify an average scanpath for multiple users. In other words, they suggest repeatedly aligning a scanpath with another scanpath in the list of scanpaths until a single scanpath is left in the list that represents their average scanpath. However, the technique is not described in depth and they have not evaluated it with any subsequent study yet.
When two scanpaths are aligned, their shared visual elements can be lost because of their positions in the scanpaths.For example, two scanpaths are aligned in Table 3.Although the first scanpath starts with the element D and the second scanpath has the element D in the second position, the element D is lost in the result of the alignment.Therefore, this technique becomes reductionist because of the alignment process.The durations of fixations and the positions of visual elements on web pages are not taken into consideration here.
Position-based Weighted Models.Sutcliffe and Namoun (2012) use a position-based weighted model to investigate where users focus in very early phases of their searches on web pages.They firstly segment web pages by using a 3x3 grid-layout segmentation, and then find the corresponding segments of the first three fixations of users on the web pages.They then give one point to the first segments, 0.5 points to the second segments and 0.2 points to the third segments.After this, they calculate the total point for each segment and sort them by the total points in descending order.
Table 10
The position-based weighted model of Sutcliffe & Namoun (2012) is applied to the scanpaths in Table 1
When the position-based weighted model of Sutcliffe and Namoun (2012) is applied to the scanpaths in Table 1 (see Table 10), the initially visited visual elements on the HCW Travel web page are identified as follows: C (7.2 points), A (4 points), B (2.7 points), D (2.7 points), E (0.4 points). This model only concentrates on very early phases of searching on web pages. Moreover, there cannot be any repetition in the common path, but users can fixate the same visual element more than once. As this model only focusses on the first three visual elements in individual scanpaths and none of the visual elements are excluded, the reductionism is not applicable here. Holsanova et al. (2006) apply a similar approach to analyse reading paths and reading priorities on newspaper spreads. They firstly divide a newspaper spread into its AoIs and then rank them based on the first visits to the AoIs by users (Holmqvist et al., 2011).
The HCW Travel web page has seven visual elements. Thus, when the position-based weighted model of Holsanova et al. (2006) is applied to the scanpaths in Table 1 by giving 7 points to the firstly visited visual elements and no point to the non-visited visual elements, the sequence of the visual elements for all the scanpaths is identified as follows: CDABEFG. Table 11 shows the points for each visual element in each scanpath. Although the same AoI can be visited several times by users, the repetitions are not taken into consideration by this approach. Besides this, some AoIs may not attract users, but none of the AoIs is excluded in their model. Therefore, the reductionism is also not applicable to this model.
Table 11
The position-based weighted model of Holsanova et al. (2006) is applied to the scanpaths in Table 1
Neither of the position-based models of Sutcliffe and Namoun (2012) and Holsanova et al. (2006) considers the durations of fixations or the positions of visual elements on visual stimuli.
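A minimal Python sketch of the Sutcliffe and Namoun (2012) scoring scheme is given below; the weights 1, 0.5 and 0.2 are the ones quoted above, and the example scanpaths are hypothetical.

from collections import defaultdict

def position_weighted_scores(scanpaths, weights=(1.0, 0.5, 0.2)):
    """Score elements by weighting the first few fixated elements of each scanpath."""
    scores = defaultdict(float)
    for path in scanpaths:
        for element, weight in zip(path[:len(weights)], weights):
            scores[element] += weight
    # Sort by total points in descending order.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical scanpaths; only their first three elements contribute to the scores.
print(position_weighted_scores(["CADBE", "CABD", "ACD"]))
# [('C', 2.5), ('A', 2.0), ('D', 0.4), ('B', 0.2)]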
Hierarchical Clustering with the Dotplots algorithm. The Dotplots algorithm is also suggested by Goldberg and Helfman (2010) for clustering multiple scanpaths hierarchically to identify their common scanpath. The algorithm was originally developed for the purpose of comparing two biological sequences (Krusche & Tiskin, 2010). Figure 6 illustrates how this algorithm works with the seventh and ninth scanpaths in Table 1 as an example (Eraslan et al., 2013). As can be seen from this example, it uses a two-dimensional matrix. One scanpath is written horizontally (S7) and another one is written vertically (S9). When the same visual elements are matched, their intersections are marked with dots. The dots are then used to find the longest straight line as a shared scanpath. As shown in Figure 6, BCCDCDDED, which is represented by a solid line, can be found as a shared scanpath of the seventh and ninth scanpaths in Table 1.
Figure 6. Merging the seventh and ninth scanpaths in Table 1 with the Dotplots algorithm (from Eraslan et al. (2013)).
To hierarchically cluster multiple scanpaths with the Dotplots algorithm, the two most similar scanpaths are selected from the list of scanpaths by using the Dotplots algorithm and the selected scanpaths are then merged. Next, the merged scanpath is added to the list of scanpaths and the selected two scanpaths are removed. This process is repeated until only one scanpath is left in the list, which represents the common scanpath. In order to merge two scanpaths, they suggested two different ways: (1) identifying a shared scanpath of the two similar scanpaths by using the Dotplots algorithm, or (2) assigning one of the two similar scanpaths to the merged scanpath. The second way amounts to selecting one of the individual scanpaths as a common scanpath, which is a debatable idea as users might follow different paths to complete their tasks (see Figure 9).
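Building on the pairwise sketch above, the hierarchical procedure could be outlined as follows. Taking the length of the shared scanpath as the pair similarity is an assumption made for illustration; it reuses the dotplot_shared_scanpath function defined earlier.

```python
def hierarchical_common_scanpath(scanpaths):
    """Hierarchically merge scanpaths: repeatedly pick the pair with the longest
    Dotplots-shared scanpath, replace the pair by that shared scanpath, and stop
    when a single scanpath is left. Pair similarity is assumed to be the length
    of the shared scanpath, which is one possible reading of the procedure."""
    paths = list(scanpaths)
    while len(paths) > 1:
        i, j = max(
            ((a, b) for a in range(len(paths)) for b in range(a + 1, len(paths))),
            key=lambda ab: len(dotplot_shared_scanpath(paths[ab[0]], paths[ab[1]])),
        )
        merged = dotplot_shared_scanpath(paths[i], paths[j])
        paths = [p for k, p in enumerate(paths) if k not in (i, j)] + [merged]
    return paths[0]
```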
Figure 7 shows how the scanpaths in Table 1 are hierarchically clustered with the standard Dotplots algorithm by using the first way of merging. This approach is also used by Albanesi et al. (2011), who call the result a dominant path.
As can be seen from Figure 7, only the visual element C is identified as a common scanpath for the scanpaths in Table 1 with this hierarchical clustering. This is mainly caused by the Dotplots algorithm, as can be recognised from Figure 6, which illustrates how the Dotplots algorithm finds the shared scanpath of two scanpaths. Although the dashed line would provide a longer shared scanpath in comparison to the solid line, it cannot be detected because of the disconnections. Hence, this algorithm makes the hierarchical clustering reductionist at the end. Besides, neither the durations of fixations nor the positions of visual elements on web pages are used by this approach to identify a common scanpath.
eMINE Scanpath Algorithm. In the eMINE scanpath algorithm, the two most similar scanpaths are first selected from the list of scanpaths by using the String-edit algorithm, and the LCS technique is then applied to these two scanpaths to find their common scanpath (Chiang, 2009). After that, the chosen two scanpaths are removed from the list and their common scanpath is added to the list. This process is repeated until there is a single scanpath in the list. The single scanpath is then abstracted to provide the common scanpath. When this algorithm is applied to the scanpaths in Table 1, it provides CDED as a common scanpath (see Figure 8) (Yesilada et al., 2013). The eMINE scanpath algorithm tries to address the reductionist problem of the Dotplots algorithm by using the String-edit algorithm and the LCS technique together instead. However, it still uses a hierarchical clustering, which means that some visual elements can be lost at the intermediate levels. Because of this, the eMINE scanpath algorithm is still likely to produce very short common scanpaths which are not useful for further processing of web pages. Assume that the individual scanpaths S6: DCDCABEDCD, S8: CACDECECD and S10: CACDADCBECDCB are available (see Table 1). First of all, the individual scanpaths S8: CACDECECD and S10: CACDADCBECDCB are merged as S(8,10): CACDCECD. When S6: DCDCABEDCD is merged with S(8,10): CACDCECD, CDCECD is identified as a common scanpath. As can be seen from this example, although the visual element A is shared by the three individual scanpaths, it is not included in the common scanpath. Similar to the other techniques that identify a common scanpath for multiple scanpaths (see Table 2), the eMINE scanpath algorithm does not consider the durations of fixations or the positions of visual elements on web pages.
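A minimal sketch of the LCS step used in this merging is given below. It only shows the merge of two scanpaths, not the String-edit-based selection of the most similar pair, and the example strings come from the worked example above.

```python
def longest_common_subsequence(s1, s2):
    """Longest common subsequence of two scanpaths; unlike the Dotplots
    diagonal, the matched elements need not be contiguous, which is why S8 and
    S10 can be merged into CACDCECD in the example above."""
    m, n = len(s1), len(s2)
    lcs = [["" for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                lcs[i][j] = lcs[i - 1][j - 1] + s1[i - 1]
            else:
                lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1], key=len)
    return lcs[m][n]

# Prints a longest common subsequence of S8 and S10 (length 8, e.g. CACDCECD).
print(longest_common_subsequence("CACDECECD", "CACDADCBECDCB"))
```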
Discussion
To support researchers in identifying salient web page features, eye tracking software products typically provide heat maps showing those parts of web pages which are mostly fixated by users. However, these maps are not designed to illustrate user scanpaths. These products also allow the visualisation of scanpaths along with gaze plots (see an example in Figure 1). Visualisations based on gaze plots are simple individual scanpaths displayed together. These have a limited benefit in evaluating a website in terms of generalisability. When there are multiple scanpaths, these plots become useless because it is difficult to distinguish them (see an example in Figure 9). While there are other visualisation techniques (Räihä et al., 2005), these also become complicated to analyse as the number of users increases. This article analyses the techniques which can be used to compare and correlate multiple user scanpaths. For instance, the techniques of the similarity/dissimilarity calculation group can be used for comparing scanpaths of two different user groups to investigate whether they follow different paths to complete a particular task (Eraslan & Yesilada, 2015). Moreover, the techniques of the transition probability calculation group can be used for investigating the efficiency of the arrangements of elements (Ehmke & Wilson, 2007). Furthermore, the techniques of the pattern detection and the common scanpath identification groups can be applied to user scanpaths and the results can then be used for re-engineering web pages to allow direct access to the firstly visited visual elements (Yesilada et al., 2013; Akpınar & Yeşilada, 2015).
While all methodologies have pros and cons (see Table 2), it is worth discussing some of the more notable limitations, along with suggestions for their mitigation.
Pre-processing: Eye tracking data typically consist of a large number of fixations; however, some of the fixations may not be meaningful. For example, involuntary eye movements may occur due to the oculomotor system (Cornsweet, 1956). Since scanpaths are correlated with visual elements of web pages by using fixations, meaningless fixations should be eliminated from the eye tracking data to reduce the variance. For example, our analysis showed that the eyePatterns analysis tool cannot discover the pattern EDED in the scanpath BCECDCDCDEDECDC because of the element C. However, that element might be present due to a meaningless fixation. Therefore, eye tracking data should be pre-processed to ensure that meaningless fixations are excluded, improving the quality of the data. The key is identifying 'meaningless' fixations in a well-founded manner.
In the literature, there are researchers who remove fixations if their durations are below a particular threshold. For example, Rämä and Baccino (2010) eliminated fixations with a duration of less than 100 milliseconds from their studies. However, different approaches exist regarding the duration that is needed to extract information from a display (Rayner, Smith, Malcolm, & Henderson, 2009; Glöckner & Herbold, 2011). In particular, Rayner et al. (2009) suggest that users require at least 150 milliseconds for each fixation to process a display normally. However, such generalisations can be a problem because web pages can differ in their degrees of complexity. Therefore, the duration needed to extract information can be different from one page to another. The duration can also be affected by individual factors, such as gender (Pan et al., 2004). When a pre-defined threshold is used for eliminating meaningless fixations, eye tracking data can be biased in some way. Instead of using a pre-defined threshold, a new value can be determined for each page by analysing the data. In particular, researchers can benefit from analysing user fixations on target areas to identify the minimum duration that is needed to achieve the target.
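As a simple illustration of such pre-processing, the sketch below drops fixations under a fixed duration threshold. The fixation record structure is assumed for illustration, and, as argued above, a data-driven per-page threshold may be preferable to the fixed value used here.

```python
def drop_short_fixations(fixations, min_duration_ms=100):
    """Remove fixations shorter than a threshold before scanpaths are built.
    The 100 ms default follows Rämä and Baccino (2010); the threshold could
    instead be derived per page, e.g. from fixations observed on target areas."""
    return [f for f in fixations if f["duration_ms"] >= min_duration_ms]

# Fixations are assumed to be dicts such as {"aoi": "C", "duration_ms": 80}.
clean = drop_short_fixations([{"aoi": "C", "duration_ms": 80},
                              {"aoi": "D", "duration_ms": 240}])
```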
Cognitive Processing: It is widely accepted that fixation duration is related to the depth of processing and the ease or difficulty of information processing (Velichkovsky et al., 2002; Follet et al., 2011). To take cognitive processing into account, fixation durations should be carefully considered. However, the majority of the scanpath analysis techniques do not consider fixation durations (see Table 2). For example, our analysis showed that the eMINE scanpath algorithm provides CDED as a common scanpath for the scanpaths in Table 1, but it does not illustrate which element has the longest time. As also mentioned above, there are researchers who eliminate fixations based on a particular duration, even though they might have some information content. Researchers should also give their attention to fixation durations while they are analysing scanpaths. In particular, they should determine how much time is typically needed to complete the task that they want to ask their users. When a particular user completes the task in an unexpected duration, the user's data should be analysed to investigate the reasons.

Reductionist Approach: Our analysis showed that scanpath analysis techniques tend to be reductionist while discovering patterns and identifying common scanpaths. In other words, the common scanpaths/patterns are likely to be unacceptably short, which is not helpful for understanding users' behaviours on web pages. In particular, the common scanpath/pattern may not include the visual element shared by all individual scanpaths and/or the visual element included by the majority of the scanpaths. For example, the common scanpath identified by the eMINE scanpath algorithm for the individual scanpaths DCDCABEDCD, CACDECECD and CACDADCBECDCB does not include the element A even though it is included in all of the individual scanpaths (see the details in the eMINE Scanpath Algorithm section). A technique with a reductionist approach may also identify no common scanpath/pattern or a common scanpath/pattern with a single element. Since a single element does not illustrate a sequence, it is not helpful for understanding the sequential behaviours of users on web pages. This problem can be addressed by taking the following suggestions into consideration.
1. The commonly visited visual elements should be included in the common scanpaths/patterns. Hence, researchers should firstly identify these elements and ensure that these visual elements are included in the common scanpaths/patterns.
2. The firstly visited visual elements should be located at the initial positions of common scanpaths/patterns. For instance, if the visual element C is firstly visited by the majority of the users, it should be located at the beginning of the common scanpath/pattern.
3. Small deviations should be allowed from strict sequentiality in some cases. In particular, there can be some visual elements that are fixated by all users but in a slightly different order. Researchers should ensure that these visual elements are also included in the common scanpaths/patterns.
Even though this article focuses on the web, the scanpath analysis techniques have also been used in different domains. For example, Hejmady and Narayanan (2012) applied the SPAM algorithm to identify visual attention patterns of programmers when they debug programs with an Integrated Development Environment (IDE). Another example is Holsanova et al. (2006), who used a position-based model to analyse entry points and reading paths of readers on newspaper spreads. As the techniques revisited here can be applied to all static visual stimuli, researchers from different domains can also benefit from this article.
In order to analyse and compare the scanpath analysis techniques, we used the eye tracking data of ten users. Even though the dataset is small, it is useful to illustrate the pros and cons of the techniques. The techniques can also be analysed and compared with a larger dataset in the future. However, when the sample size increases, the variations are also likely to increase. Therefore, the techniques may experience some problems in dealing with these variations, especially the techniques that try to detect patterns or identify common scanpaths. In particular, they may not be able to provide any result because of the variations.
Finally, in this article, we unfortunately could not apply some of the techniques to the dataset. For example, the implementation of the T-Pattern Detection technique is not publicly available (Magnusson, 2000). We believe that the implementations of the scanpath analysis techniques should be available for research/testing purposes to support eye tracking research.
Conclusions
Scanpaths correlated with visual elements of web pages can be analysed by using different techniques. Each of these techniques has its strengths and weaknesses and researchers should pick those which are the most appropriate for the task at hand. While this article combines and revisits these techniques, and investigates their strengths and weaknesses by evaluating them with a third-party eye tracking dataset, all possible situations cannot be tested (see the Methodology section). This article also classifies the scanpath analysis techniques according to their goals, as shown in Table 2, and by so doing allows researchers to focus directly on the techniques that are suitable for their scanpath analysis on web pages. The main concluding remarks are listed below.
1. The String-edit algorithm is useful and straightforward for determining the similarity between a pair of scanpaths as a percentage (Underwood et al., 2008). However, when researchers pay attention to the distances between visual elements on web pages, they should create a substitution cost matrix based on these distances and then integrate the matrix into the String-edit algorithm (Takeuchi & Habuchi, 2007); a minimal sketch of such a distance-based cost integration is given after this list.
2. When researchers want to investigate transition probabilities between visual elements of web pages, they should consider a transition matrix as it clearly illustrates the transition probabilities (West et al., 2006).
3. eyePatterns analysis tool is publicly available and it helps researchers to search for a particular pattern within given scanpaths by allowing some gaps between the visual elements within the pattern (West et al., 2006).
4. When researchers want to detect repetitive patterns within multiple scanpaths, they should use the T-Pattern Detection technique that provides a number of different parameters for them to configure according to their goals (Magnusson, 2000). However, the implementation of the T-Pattern Detection technique is a commercial product.
5. If oversimplification can be a problem for researchers when they are identifying patterns and common scanpaths for multiple scanpaths, they should avoid using the techniques with a very reductionist approach.
6. It is widely accepted that fixation durations have a relationship with the depth of processing and the ease or difficulty of information processing (Velichkovsky et al., 2002; Follet et al., 2011). Therefore, when researchers want to consider cognitive processing, they should pick the techniques that use fixation durations based on their goals. For example, they should use the ScanMatch technique to compare a pair of scanpaths (Cristino et al., 2010).
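Following up on the first remark above, the sketch below shows a String-edit (Levenshtein) distance that takes a substitution cost matrix derived from distances between AoI centres. The AoI centre coordinates and the scaling of costs to the 0-1 range are illustrative assumptions rather than the exact scheme of Takeuchi and Habuchi (2007).

```python
def weighted_edit_distance(s1, s2, sub_cost, indel_cost=1.0):
    """String-edit distance where substituting one AoI for another costs an
    amount taken from a substitution cost matrix, e.g. one derived from the
    Euclidean distances between AoI centres (Takeuchi & Habuchi, 2007)."""
    m, n = len(s1), len(s2)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if s1[i - 1] == s2[j - 1] else sub_cost[(s1[i - 1], s2[j - 1])]
            d[i][j] = min(d[i - 1][j] + indel_cost,   # deletion
                          d[i][j - 1] + indel_cost,   # insertion
                          d[i - 1][j - 1] + sub)      # substitution
    return d[m][n]

# Hypothetical AoI centres (in pixels); substitution costs scaled to [0, 1].
centres = {"A": (100, 80), "B": (400, 80), "C": (250, 300)}
max_d = max(((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
            for xa, ya in centres.values() for xb, yb in centres.values())
sub_cost = {(p, q): (((centres[p][0] - centres[q][0]) ** 2 +
                      (centres[p][1] - centres[q][1]) ** 2) ** 0.5) / max_d
            for p in centres for q in centres}
print(weighted_edit_distance("ABC", "ACB", sub_cost))
```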
To make the best use of available eye tracking data, researchers should select an appropriate technique for their studies. In order to do so, they should be aware of the strengths and weaknesses of the alternatives, and this article aims to support that.
Figure 1. An example of a user scanpath on the HCW Travel web page which is segmented into its visual elements. This web page was used for the review of scanpath analysis techniques.
(Caption fragment) [... : Substitution, +/-: Insertion/Deletion, =: None] stimuli in the context of Engineering and Civil War by using the String-edit algorithm.
Figure 2. A grid-layout segmentation with the ScanMatch technique of Cristino et al. (2010), where each element is represented with one upper-case letter and one lower-case letter.
Figure 3. An example part of the tree visualisation of the eSeeTrack analysis tool.
Figure 5. The T-Pattern ABCDE that occurs in two behaviour sequences three times (from Mast & Burmester (2011)).
As a result of an iterative process, each T-Pattern can be combined with another event type or T-Pattern to create a longer T-Pattern (see an example in Figure 5) (Magnusson, 2000; Mast & Burmester, 2011). A T-Pattern with n components can be represented as X1 [d1, d2]1 X2 [d1, d2]2 ... Xi [d1, d2]i Xi+1 ... Xn, where [d1, d2] represents the critical interval (Magnusson, 2000). This technique uses the significance level parameter while generating T-Patterns (Magnusson, 2000). This parameter is related to critical intervals and it influences the number of event types in T-Patterns (Magnusson, 2000). When the significance level decreases, fewer and shorter patterns are detected (Magnusson, 2000). The T-Patterns can also be filtered by using various criteria such as the minimum pattern length and the minimum number of occurrences of the pattern (Magnusson, 2000). The T-Pattern Detection technique has many different parameters, and the detected patterns can be affected by the adjustments of these parameters. As a consequence, the subjectivity level of the results can be a problem. By using strict values, the technique can also become reductionist, especially with the Fast Critical Intervals. As illustrated in Figure 4, the pattern AB may not be detected as a T-Pattern because of the Fast Critical Interval. Likewise to the majority of the scanpath analysis techniques (see Table 2), the T-Pattern Detection technique does not consider the positions of visual elements on web pages.
Figure 7. The hierarchical clustering of the scanpaths in Table 1 with the standard Dotplots algorithm.
Table 1. Individual scanpaths of ten users on the HCW Travel web page in terms of its visual elements.
Table 6. Substitution cost matrix generated for the HCW Travel web page by using the Euclidean Distance that is suggested by Takeuchi & Habuchi (2007).
Table 8. Searching for the exact pattern ED in the scanpaths in Table 1 by using the eyePatterns analysis tool.
Table 9. Discovering patterns with the length four in the scanpaths in Table 1 by using the eyePatterns analysis tool.
| 2019-01-02T05:46:00.639Z | 2015-12-30T00:00:00.000 | {
"year": 2015,
"sha1": "a641ba11fc3705434cb1ed71c500372a4b5f00f5",
"oa_license": "CCBY",
"oa_url": "https://bop.unibe.ch/JEMR/article/download/2430/3624",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a641ba11fc3705434cb1ed71c500372a4b5f00f5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
254944004 | pes2o/s2orc | v3-fos-license | It’s a Group Thing: How Voters go to the Polls Together
Across European Parliament, local and general elections in Denmark, between half and three quarters of voters in households with multiple voters cast their vote within a minute of another household member. This finding, revealed using data from a time-stamped voter panel covering more than two million Danish voters, establishes that many families visit the polling station together. The results are replicated using survey data from Denmark, the UK and a range of other countries, indicating that voting together is a widespread phenomenon, supporting the characterization of voting as a social act. For the first time, our analysis reveals that acquiring a potential voting partner increases turnout, whilst losing one decreases turnout.
Introduction
Do people who live together vote together, and does that matter for turnout? One of the most persistent suggestions in the literature on electoral behavior in general, and voter turnout in particular, is that people are both similar to and influenced by their social intimates. As William Glaser (1959) and others concluded more than half a century ago, the turnout behavior of married couples is strikingly similar (see also, for instance Anderson 1943). Later studies have confirmed this pattern, often with a focus on households or married couples converging in turnout behavior (Wolfinger and Rosenstone 1980;Stoker and Jennings 1995). This suggests that household context is crucial in understanding individual turnout behavior (Cutts and Fieldhouse 2009;Bhatti et al. 2018). Whilst considerable research has pointed to the role of information and discussion (Huckfeldt and Sprague 1995), social norms (Knack 1992), and inter-personal mobilization (Rosenstone and Hansen 1993), a potentially important but rather less-explored explanation for this pattern is that citizens accompany each other to the polling stations.
Many established democracies have witnessed decreasing rates in turnout over the last few decades (Franklin 2004;International IDEA 2015;Vowles 2018). At the same time household structures have changed dramatically as families become simultaneously more complex and individualized. There are more single households than ever before, increasing divorce rates, fewer marriages, more non-marital childbearing, and more cohabitation outside marriage; the family has become a much less stable and more complex unit (Carlson and Meyer 2014;Tach 2015). This makes it more important than ever to study the extent to which individuals that live together vote together, and whether changes in family structure thus have consequences for turnout.
In this paper we provide the first objective analysis (i.e. not-self-reported data) of whether household members actually vote together. We examine how many voters vote with someone else in their own household, and who votes together and alone. While there has been much speculation about this, the empirical evidence is still scarce and has mainly been based on a small number of surveys (Fieldhouse and Cutts 2012). We also show that some types of individuals are more likely to vote with others than the population at large.
Second, we present evidence that addresses the question of the causal effect of voting together on turnout. Recently increasing attention has been given to the household as the most important unit for influencing the decision to turn out (e.g. Cutts and Fieldhouse 2009). While it now seems likely that household processes play a causally significant role in voting, it is less clear exactly what those processes are. One promising explanation could be that individuals affect each other because they are directly confronted with each other's decision to vote on Election Day and accompany each other to the polls. This has been referred to as the companion effect (Fieldhouse and Cutts 2012). Because voting together necessitates turning out to vote, it is very difficult to demonstrate whether the availability of a companion increases the probability of voting. Even large panel studies (e.g. the British Election Study) lack sufficient data on changes in availability of companions to assess the causal effect. While we do not claim to overcome all identification issues, in this study we exploit a unique administrative data set linking individual voters across three elections to provide new evidence of a companion effect.
We apply our analyses across three types of elections: a municipal election, a European parliament election and a general election in Denmark using a large, unique administrative dataset with the exact timing of the vote for more than two million individuals in the three elections merged with precise residential information. Additionally, in order to assess the external validity of our analyses we draw upon similar survey items in the British and Danish Election Studies to understand to what extent the level of voting together differs between the UK and Denmark, two very different cultural and electoral contexts.
In the following section we briefly summarize the theoretical rationale for and empirical evidence of voting together from the literature, and set out two hypotheses.
The analysis is divided into three parts. In the first we examine the extent to which citizens vote together and thereafter we describe which types of individuals are more likely to vote in company. In the last part of the paper we present evidence for the causal influence of voting together.
Theory, Evidence and Hypotheses
At least since Anderson (1943) it has been recognized in the turnout literature that people vote together with the people they live with. This idea was originally based on observations and survey research that show that married couples vote more often than unmarried individuals (Wolfinger and Rosenstone 1980; Jennings 1995, 2005). Research also shows that the intra-household turnout correlation is very high and increasing attention has been given to the household as the most important unit for influencing the decision to turn out (Cutts and Fieldhouse 2009; Bhatti et al. 2018). Individuals sharing residency are remarkably congruent in their voting behaviors and when one individual in a household is mobilized to vote, between 30 and 60% of the mobilization effect spills over to other household members (Nickerson 2008; Sinclair et al. 2012; Bhatti et al. 2017). Fieldhouse and Cutts (2012) have coined the phrase the companion effect which points to the likely importance of voting in tandem. While all these studies suggest some kind of effect of household members on each other with respect to turnout, the correspondence in turnout has been mainly based on correlations in turnout at the household level rather than on systematic evidence of whether people actually vote at the same time. More recently, in order to address this, questions measuring voting together have been fielded in a number of election studies around the world. However, while these data (reported below) provide some descriptive evidence supporting the companion effect hypothesis that voting is a social act and not something that is carried out in isolation, these findings would be strengthened if corroborated with objective data. In addition, more evidence is required to establish causality. It remains to be demonstrated that the initial findings hold with objective data and that voting together has some causal influence on the decision to vote, for example by reducing the cost of voting (Nickerson 2008). Ultimately, if the decision to vote is a collective or joint decision rather than an individual one, then voting might be considered a collective act.
Before setting out our hypotheses regarding the impact of voting together, we briefly lay out our expectations concerning the number and type of people who vote together. The few existing surveys of voting together suggest that it is a widespread phenomenon and we expect our objective data to confirm this. We also might expect that the incidental benefits gained from voting together-for example the pleasure of taking a walk to the polling station together-are largely opportunistic and are therefore likely to be more important for married couples or electors living in larger households. As household size increases, the number of potential voting partners increases as well. This is important both in terms of the number of other voters who may invoke social norms of voting and the extent to which citizens confront the decision of other household members about whether to vote or stay home. Because social intimacy and the influence of the household might be expected to increase with marriage so should the relevance of voting with someone else.
The descriptive question of how many and who vote together is a necessary first step in establishing the importance of the phenomenon, but it does not directly answer the bigger question of whether voting together increases turnout. There are good theoretical reasons to think this might be the case, not least because voting is a social act (Franklin 2004; Fieldhouse and Cutts 2016). The dual process model of behavior suggests two main sources of inter-personal influence: norms and information (Hogg and Vaughan 1995). First, individuals within a given household may influence each other because they are directly confronted with each other's decision to vote on Election Day. The social norm of voting likely plays an important role here. Social pressure from peers has been shown to have an important exogenous influence on voter turnout (Green and Gerber 2010) and may lead to the internalization of the norm of voting which can be manifested as civic duty (Coleman 1990). Moreover, if the norms of social intimates are particularly persuasive, social intimacy in families is likely to mean that household correlations in turnout result from the level of civic duty within the household (Fieldhouse and Cutts 2016). Second, because of higher rates of discussion within families than other social relationships (Huckfeldt and Sprague 1995; Zuckerman et al. 2007), household members are likely to influence each other's electoral participation by exchanging information, for example about the election, candidates or even how to vote (e.g. removing some of the anxiety of voting for the first time). In addition to informational and normative influence, we propose a third type of mechanism, the companion effect: by attending the polling station together, voters may reduce the cost of going to the polling station to cast the vote and increase the peripheral benefits such as enjoying the social aspect of the experience (Fieldhouse and Cutts 2016).
As discussed above, while (given the right data) it is possible to establish whether individuals vote together, it is more challenging to empirically verify that this has an effect on turnout, because voting together necessitates voting. However, what we can test is whether getting a potential voting partner is consequential for turnout. This leads us to the first hypothesis: H1 Acquiring a potential voting companion leads to increased turnout probability.
Just like acquiring a potential voting partner changes the availability of a potential companion, so does losing one. For example, Hobbs et al. (2014) show how widowhood results in a long term drop in turnout. When examining loss of a partner we can even be more precise because we can empirically distinguish between actual voting partners (i.e. household members who voted together in the last election) and potential but not actual voting partners (i.e. household members who did not vote together in the last elections). If voting together matters, we could expect losing a voting partner would have greater adverse effects on voting than losing a household member in general. We therefore hypothesize: H2 Losing a voting companion leads to a greater fall in turnout probability than losing a non-companion. 1
Electoral Context
This article draws on data from three recent Danish Elections. The choice of case is driven by the availability of unique data (described below). Our data are collected across three types of elections: municipal, European parliament and general elections. The three types of election provide variation in the prevailing level of turnout across which we can measure the companion effect. No voter registration is needed in Danish elections. Eligible individuals automatically get registered on the voter list of his/her local polling station and polling cards are mailed to the individual's official address before Election Day. Elections are non-compulsory.
The 2013 municipal elections 2 took place simultaneously in all of the 98 Danish municipalities on November 19, 2013. More than 30% of the Danish GDP is administered at the local level (municipalities and regions) and municipalities take care of most of the core functions in the welfare state such as child care, schools, elderly care, the social area, libraries and some parts of the health sector. At municipal elections each municipality is a constituency where between nine and 55 mandates are distributed proportionally among multiple parties. The municipal councils are elected for a fixed 4 year period. In 2013 turnout was 71.9% which was slightly above the historical average of about 70%.
The 2014 European parliament elections took place on May 25, 2014. 3 In the elections the entire country is a single constituency and the 13 Danish parliamentarians are elected for a 5 year period on open party lists using proportional representation. In the 2014 election turnout was 56.3%.
The 2015 national parliament election was held on 18 June 2015. Denmark has a parliamentary political system where elections to the national parliament are the most salient (only matched occasionally with referendums regarding EU membership). The 179 parliamentarians (of which four are elected directly in the Faroe Islands and Greenland) are elected for 4 years or until the prime minister calls for an election. The election system ensures national proportional representation though the representatives of the parties are elected in ten grand districts. Turnout in 2015 was 85.9%, which is close to the historical average.
Data: Time Stamped and Validated Voter Files Across Three Elections
In this paper we exploit a unique feature of Danish digital voter lists: time stamps which provide us with objective information about the exact timing of individuals' arrival at the polling desk to obtain their ballot. After the 2013 municipal election, the 2014 European election and the 2015 national election we collected data on actual turnout from the municipalities who administered the elections. The municipalities use two types of systems for recording whether an individual voted: manual lists or digital lists. In the polling stations with digital lists individuals are registered digitally when they arrive at the polling station to obtain their ballot (which is always a paper ballot) utilizing a barcode on the polling card. One crucial detail is that when the barcode is scanned, the voter is not only digitally marked on the voter list; the time of the scan is also registered. This information can be used to investigate whether individuals vote together as the time data can anonymously be linked to residential information, family information etc. in Statistics Denmark (Bhatti and Hansen 2010; Bhatti et al. 2014a, b). As we are interested in whether people vote together we focus only on polling stations with digital lists. The digital lists were administered by the municipalities and there is therefore no individual level self-selection into the study, limiting the risks of response bias. Furthermore, as registration is automatic in Denmark the voter lists include all eligible individuals no matter their potential interest in voting as they did not have to take active steps to become registered. If a municipality participated we had access to information about turnout and the timing of the vote in minutes for all eligible individuals at the relevant polling stations. Crucially, after the election, we could match this information in anonymous form to detailed socio-demographic information from the official statistics bureau, Statistics Denmark. We obtained an address identifier allowing us to identify which individuals share a household. We had access to vote and address information for approximately 2.4 million individuals for the 2013 municipal elections, 2.3 million individuals for the 2014 European parliament elections, and around 2.5 million individuals for the 2015 General Election. The slight variation in sample size for each election is due to the differences in eligibility rules (the total number of eligible voters was between 4.14 and 4.42 million in the elections), and small variations in the participating municipalities, which arise mainly because more voter files become digital over time.
When utilizing the register data we focus exclusively on voting together with other eligible household members because this can be objectively identified by address of residence. As noted above, the household is theoretically the most interesting unit with respect to joint voting, having been identified as the most influential context for political socialization (Berelson et al. 1954; Glaser 1959; Zuckerman et al. 2007) and empirically the most important context for inter-personal influence on turnout (Nickerson 2008; Cutts and Fieldhouse 2009; Sinclair et al. 2012; Bhatti et al. 2017). It is also worth noting that the polling card has the address of the assigned polling station and that assignment to polling stations is based on residential address, such that household members are always assigned to the same polling station. These unique data allow us to examine the phenomenon of voting together. Furthermore, the sample sizes and the fact that the electoral data can be linked to administrative data allow us to gain further leverage on the question regarding the causal significance of voting together on turnout.
The unit of analysis in all analyses in this study is the individual voter. However, in order to calculate whether each individual voted with others, we used the individual address identifier in the register data to create all possible intra-household dyads. For each dyad, as an indicator of voting together, we identified whether the two individuals obtained the ballot at the voting station within one minute of each other. 4 After dyads voting together were identified, we then deduced whether each individual was part of at least one dyad voting together. If so, they were classified as having voted with someone else from their household. If not, the person was considered as voting alone. 5 Thus, in the analyses we have one record per elector and the main variable of interest concerns whether she voted with others in her household. We supplement the register data with surveys from two Danish election studies and the British Election Study which are based on subjective re-collections of the voting act but are able to capture voting together between non-cohabiting individuals.
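A minimal sketch of this dyad construction is given below using pandas. The column names and data layout are assumptions for illustration and do not reflect the structure of the actual register data.

```python
import pandas as pd

def flag_voted_together(voters: pd.DataFrame) -> pd.DataFrame:
    """Flag each elector who obtained a ballot within one minute of another
    eligible member of the same household. `voters` is assumed to hold one row
    per elector with columns 'person_id', 'household_id' and 'ballot_time'
    (NaT for non-voters and postal voters); names are illustrative only."""
    # All within-household pairs, excluding self-pairs.
    pairs = voters.merge(voters, on="household_id", suffixes=("", "_other"))
    pairs = pairs[pairs["person_id"] != pairs["person_id_other"]]
    # A dyad votes together if the two ballot times are at most one minute apart.
    close = (pairs["ballot_time"] - pairs["ballot_time_other"]).abs() <= pd.Timedelta(minutes=1)
    together_ids = set(pairs.loc[close & pairs["ballot_time"].notna(), "person_id"])
    out = voters.copy()
    out["voted_together"] = out["person_id"].isin(together_ids)
    return out
```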
How Many People Vote Together?
Before testing our hypotheses, we examine the extent to which individuals vote together with their household members in Denmark (see Table 1) and in comparison with other countries where survey data are available ( Table 2). The data in Table 1 uses the Danish register data which contain the timing of the vote from time stamps on the voter list. The individuals in the samples are divided depending on whether they did not vote, voted by post (in Denmark postal voting is a form of early voting usually cast at the city hall or citizen service centers), voted at the polling station alone or voted at the polling station with another household member.
Between 29 and 35% of all eligible individuals voted at the polling station on Election Day with someone from their household in each of the three elections. If we look only at voters, 41-51% voted within a minute of someone else in their household. This is, to our knowledge, the first evidence on the level of voting together from large-scale administrative data and it shows that voting together is a very common phenomenon. The 41-51% share of voters voting with others is remarkable when taking into account that some voters live alone and therefore by definition cannot vote with others in their household, while others vote by post and therefore cannot vote together on Election Day. If we restrict the sample to only individuals living in 2+ sized households, the share of voters voting with others increases to 56% (2015 general election), 60% (2013 municipal elections) and 69% (2014 European Parliament election).
Looking across the three types of elections suggests that, despite large variations in salience between the three elections, differences are relatively modest. However, one interesting pattern appears: while the absolute percentage of individuals voting together is higher when turnout is high (29, 33 and 35% across the three elections ordered by overall turnout), the relative proportion voting together is higher in low salience elections (51, 46 and 41% respectively). This indicates that individuals voting together may be more resilient to factors that reduce turnout, perhaps because voting together decreases the costs and increases the peripheral benefits of voting. Together, these findings provide prima facie evidence of the companion effect.
It is relevant to ask if the findings from Denmark also hold elsewhere. In order to do so we have fielded identical survey items in the Danish Election Study and the British Election Study across different types of elections in the two countries, aiming to estimate the level of voting together in Denmark and the UK. 6 We also utilize Table 1 Voting mode for individuals in three Danish elections (percent) The number of early voters was 4.5, 5.3, and 8.7% respectively in the three elections. These voters are registered as a separate category in the analysis. Usually early votes are casted at the city hall or citizen service centers (Fieldhouse and Cutts 2016). The Danish survey data stems from the Municipality Election Study (Kjaer 2017) and from the Danish National Election Study (Hansen and Stubager 2016) for the 2013 municipals elections and the 2015 national parliament elections. The municipal election study was conducted as a combination of web and phone interviews and had a total response rate of 44.6% yielding 4.528 respondents. The national parliament election study was carried out as a combination or web interviews and personal interviews and with 2.001 respondents it obtained a response rate of 48.8%. To inquire into whether respondents voted with others we in both surveys asked respondents "Thinking back to Election Day, which of the following best describes how you cast your ballot". The categories in the municipal survey were "I visited the pooling station on my own", "I visited the polling station with another person who did NOT vote", "I visited the polling station with another person who voted", "I do not want to answer". In the national survey the categories were similar-however, "alone" was used 1 3 Political Behavior (2020) 42:1-34 previously reported findings from Italy, Canada, Scotland and Wales (Fieldhouse and Cutts 2012). 7 Table 2 replicates the findings for voting together from Table 1 based on election studies from the six different locations. As the percent voting together of all electors is likely to be inflated by the fact that there is substantially under-reporting of non-voting in the surveys, we focus on the distribution of voting modes among voters.
Overall across all these countries between roughly one-third and two-thirds of those reporting having voted indicated that they voted together with someone at the polling station. There is some variation across countries with Denmark having twice the proportion of voters voting together compared to the UK (slightly less than 60% compared to 31%). A large part of this difference seems to be driven by the use of postal voting in the UK (postal voting is rare-approximately 5%-in Denmark and was therefore not given as a response option in the surveys). About 43% of British voters voting at a polling station voted with another person compared to approximately 58% in the Danish surveys. The difference between Britain and Denmark highlights a potentially negative aspect of postal voting. Postal voting may diminish some of the social aspects of voting by making voting a more individualized activity which could be more vulnerable to decline (Burden et al. 2014). Figures for Scotland, Canada and Wales resemble the UK while the numbers for Italy are close to the Danish ones. The findings show that across different countries and across different types of elections, voting together is a widespread phenomenon and is not specific to a single country or type of election.
The percentage of polling station voters who vote with others in Denmark is about 58% in the survey data compared to 45-49% in the register data from the same elections. The difference is most likely to be due to the fact that the register data instead of the word "on my own" and we used "person or persons" instead of "person". In the national election surveys individuals were given the option of providing multiple answers. However, only 31 individuals did this. In the analysis we only utilize their primary answer.
Footnote 6 (continued) 1 3 only identifies individuals voting together who share a household. Slightly more than 10% of those voting together in the survey data report voting only with someone outside the household. Furthermore, we cannot dismiss the possibility of overreporting of voting together in the survey data. However, overall the survey data and register data is quite consistent.
Who Votes with Others?
Having established that voting together is a frequently occurring and general phenomenon we dig deeper into what types of individuals are most likely to vote with others. We expect that voting together is largely driven by opportunity, i.e. household size (no. of eligible individuals in the household) and marriage. To confirm this, for each election we calculate the share of each mode of voting (not voting, postal voting, voting alone and voting together), by the variables of interest. Figure 1 shows the rates of voting together by household size for each of the three Danish elections. For all elections household size is strongly related to voting together. This is not surprising insofar as voting together (by definition) can only occur when the household size is greater than one. Looking at multi-person households, the descriptive relationship between size and voting together is modest. In absolute terms the share voting together declines slightly with household size in all elections-for instance, in the 2013 municipal elections 46% voted with others in two elector households while the corresponding number was 32% in large households (more than four electors). However, this is mainly because there are more non-voters in large households. When disregarding non-voters, the relative share of lone voters and individuals voting together is virtually constant across household size-for instance, in the 2013 municipal elections, the ratio of individuals voting together and voting alone is approximately 1.7:1 for household sizes of both two and greater-than-four. A possible explanation of the similar patterns in household sizes greater than two is that the opportunities of voting together in larger households may be offset by weaker ties among household members. Figure 2 shows an equivalent chart for marital status.
The charts confirm that married couples vote together more frequently than the rest of the population. The percentage of non-married individuals voting together is 15-22% across the elections while the corresponding numbers for married individuals is 45-51%.
To provide more insight into the differences between groups when controlling for a range of demographic predictors of turnout, we estimate multinomial logistic regression models for each election. The dependent variable is voting mode (voting alone, not voting, postal voting and voting together). The reference category is voting alone. We include a range of usual suspects as controls: age, age squared, agecubed, 8 gender, educational level (5 categories), income, children in the household, residential stability and ethnicity (3 categories). We restrict the models to include household sizes greater than one since single-individual households by definition cannot vote together. In Fig. 3 we show the predicted probabilities (averaged over observed values) for voting together compared to the corresponding probabilities for voting alone. Note that confidence intervals are plotted but are not visible due to the The results are consistent across elections. Even when taking household size into account a married person has a higher likelihood of voting together compared to voting alone (see also the positive coefficients for married individuals in the multinomial logit models in Appendix Tables 3, 4, 5). Unsurprisingly, the differences between married and unmarried individuals are smaller than when household size and other factors are not taken into account, but the share voting together is still around 10% points higher for married couples.
The results for other variables are also interesting. In the European elections, highly educated voters were more likely to vote together in absolute terms, but relative to voting alone, the proportion is lower than other groups across all elections (see also the negative coefficients for the high education groups in Appendix Tables 3, 4, 5). This may be because those with lower levels of education are more likely to drop out of voting when they have nobody to vote with. In other words having a voting companion may be especially important when an individual does not otherwise have the resources to vote. The corollary of this is that highly educated citizens are relatively less likely to vote together (compared to alone) possibly because their resources or norms of voting make them less reliant on the social benefits of voting, and are more likely to vote even if that means voting alone. Another potential explanation is that, insofar as the highly educated on average work longer work hours (Deding and Filges 2009), they might find it more difficult to coordinate going to polling station with a family member. Non-Western immigrants are less likely to vote together than ethnic Danes, while older people are more likely than the young. In further analyses (Appendix Tables 3, 4, 5) we have tested the robustness of the results to splitting the models on household sizes instead of controlling for household size. The results are generally consistent across models. Appendix Table 6 replicates the findings with survey data from the UK and Denmark (see the appendix). Again, the replication with survey data provides similar results as the objective data and across the two different contexts UK and Denmark.
The Relationship Between Opportunity of Voting Together and the Likelihood of Voting
We have now documented that individuals indeed vote together and that the tendency of voting together is, at least, partly is driven by opportunity and social intimacy. This is interesting insofar as it informs us about how people vote. Whilst the descriptive patterns are indicative of a connection between the opportunity for voting together and actual turnout, the causal effect-whether voting together affects turnout-still remains unproven. In other words, does the availability of a voting partner cause an increased probability of voting? Recent studies have found that households are perhaps the most important unit for inter-personal mobilization (Nickerson 2008;Sinclair et al. 2012;Bhatti et al. 2017). This could, at least partly be due to the possibility of accompanying each other to the polls, as suggested by the companion effect.
We noted above that it is difficult to demonstrate whether voting together bears any causal significance on turnout as voting together can (by definition) only occur among voters. In other words how do we know if a non-voter would have voted had they had the option to vote in company? We can go some way towards measuring the opportunity to vote together with network survey data by examining the impact of inter-personal mobilization-that is whether a respondent is asked by a discussant to vote (Rosenstone and Hansen 1993). In the 2014 European Parliament wave of the British Election Study Internet Panel (Fieldhouse et al. 2015) this was asked in a discussant ego-network module alongside whether each discussant accompanied the respondent to vote. These data show a high degree of correspondence between being asked to vote by a discussion partner and voting together: 74% of discussants who asked a respondent to vote actually accompanied the discussant to the polling station. By contrast less than 1% of those voting together did so without having been asked. Nevertheless, another 17% of those asked also voted in company, but not with the discussant who invited them. This demonstrates an imperfect correspondence between conventional measures of inter-personal mobilization (being asked to vote) and voting together. Moreover, this still does not tell us whether each respondent would have voted had they never been asked or had the opportunity to vote together never arisen. This absence of a reliable counterfactual (only having data on voting together for voters) makes it difficult to assess the causal importance of the companion effect in cross-sectional data. To get a better understanding of this we examine whether individuals who gain the opportunity to vote with a companion have a higher propensity to vote than individuals who lose the opportunity. More specifically we conduct two analyses. First, we test whether acquiring a potential voting companion, from one election to the next, leads to increased turnout probability (H1). Second, we test whether losing a voting companion leads to a greater fall in turnout probability than losing a non-companion (H2).
To examine the consequences of acquiring a potential voting companion, we use data from individuals included in our register data about whom we also have information on turnout (though not timing) from the 2009 Danish municipal elections. This means that we can create an individual level two-wave panel of the same type of elections, with a sufficiently long time-lag for a substantial number of voters to Political Behavior (2020) 42:1-34 have changed their living circumstances. Specifically, we examine whether individuals who previously lived alone in 2009 but lived at least one other elector in 2013 saw an increase their probability of voting. We also test the reverse of this-whether losing a potential voting partner (from a multi-elector household to a single elector household) 9 leads to a decrease in probability of voting. The sample for this analysis is all individuals in our data who were eligible in the 2009 and 2013 municipal elections (for 2009 we have access to data from 44 of the 98 Danish municipalities). A challenge for this analysis is the possibility that unobserved characteristics of citizens are correlated both with changes in household composition and changes in turnout behavior. By stratifying our analysis by previous turnout behavior we adopt the equivalent of a change score model which provides some protection against the effects of unobserved time-constant variables (Allison 1990;Berrington et al. 2006). We cannot eliminate this potential threat entirely, but we mitigate it as much as possible by matching on a range of pre-treatment characteristics using coarsened exact matching or CEM (Blackwell et al. 2009;Iacus et al. 2012). Subsequently, we conduct standard regression on the matched sample with appropriate weights to take into account differences in the relative number of treatment and control observations between strata.
In our models we split the sample depending on whether individuals voted or abstained at the outset in 2009 to allow for asymmetrical effects on previous voters and non-voters. This allows us to take into account that change over time could be dependent on the initial level of turnout. As the sample is stratified by turnout at the outset, the dependent variable is simply turnout in 2013 (0-abstained or 1-voted).
The key independent variable is the change in the household type of the individual. We consider four different treatment statuses, one for losing a potential voting partner, two for no change (either no household partner in both periods or a partner in both periods), and another for gaining a partner. In our analyses we compare having no partner in both periods with gaining a partner, and having a partner in both periods with losing a partner. We match exactly on pre-treatment age (one category for each year of age), education (5 categories), civic status (married vs. non-married), income (6 categories), and residential stability (9 categories). The combination of these variables provides us with more than 40,000 potential strata. After the matching we conduct a standard logistic regression of change between 2009 and 2013 on a range of variables. Figure 4 depicts the results graphically, while Appendix Table 7 of the appendix show the results numerically (see Appendix Table 8 of the appendix for a robustness test without matching which yields similar conclusions).
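A rough sketch of this matching-plus-weighted-regression step is given below using pandas and statsmodels. The column names, the coarsening of the covariates and the weighting formula are illustrative assumptions, and the published analysis may differ in its exact specification.

```python
import pandas as pd
import statsmodels.api as sm

def cem_weighted_logit(df: pd.DataFrame):
    """Coarsened-exact-matching style analysis: match treated and control
    electors exactly on coarsened pre-treatment covariates, re-weight controls
    to the treated distribution within strata, and fit a weighted logit of
    turnout in 2013 on the treatment indicator. Column names ('treated',
    'voted_2013', 'age', ...) are hypothetical, not the authors' exact data."""
    strata_vars = ["age", "education", "married", "income_group", "years_at_address"]
    df = df.copy()
    df["stratum"] = df[strata_vars].astype(str).agg("|".join, axis=1)

    # Keep only strata containing both treated and control observations.
    mix = df.groupby("stratum")["treated"].agg(["sum", "count"])
    matched = mix[(mix["sum"] > 0) & (mix["sum"] < mix["count"])].index
    df = df[df["stratum"].isin(matched)]

    # Simple CEM-style weights: treated units get weight 1; controls are
    # re-weighted to the treated/control balance of their stratum.
    n_t, n_c = (df["treated"] == 1).sum(), (df["treated"] == 0).sum()
    g = df.groupby("stratum")["treated"]
    s_t, s_c = g.transform("sum"), g.transform("count") - g.transform("sum")
    df["w"] = 1.0
    df.loc[df["treated"] == 0, "w"] = (n_c / n_t) * (s_t / s_c)

    # freq_weights is used here as a simple way to pass the weights; this is
    # an approximation rather than the authors' exact estimation procedure.
    X = sm.add_constant(df[["treated"]].astype(float))
    model = sm.GLM(df["voted_2013"], X,
                   family=sm.families.Binomial(), freq_weights=df["w"])
    return model.fit()
```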
The results in Fig. 4 and Appendix Table 7 indicate that a change in the availability of a potential voting partner is highly consequential for individual turnout. The results are especially consistent for those who abstained in 2009 (the bottom half of Fig. 4 and model 1-2 in Appendix Table 7). For those who were in a single elector household in 2009 gaining a potential partner resulted in an increase in turnout of about 10% points compared to those who did not gain a partner (bottom left of Fig. 4 and model 1 of Appendix Table 7). Among those who did have a potential voting partner in 2009, losing a potential partner resulted in a 5% point decrease in turnout (bottom right of Fig. 4 and model 2 of Appendix Table 7). For those who voted at the outset, losing a partner resulted in a 6% point drop in turnout (top right of Fig. 4 and model 4 of Appendix Table 7), but there is almost no effect of gaining one (top left of Fig. 4 and model 3 of Appendix Table 7). This might be because individuals voting at the outset were very likely to vote regardless of gaining a partner. Moreover, as well as inducing turnout, gaining a potential voting partner might disrupt previous voting patterns. For example, inevitably some subjects (including those that voted in 2009) gained a non-voting partner, which may have a demobilizing effect (Partheymüller and Schmitt-Beck 2012). In further analysis we tested this potential demobilizing effect by splitting the sample by whether those gaining a partner were joined by someone who was a voter or a non-voter in the previous election. 10 The analysis (reported in Appendix Fig. 6) shows that the effect of gaining a partner is positive for non-voters moving in with either a voter or a non-voter, although the positive effect is larger for those who gained a voting partner. Moreover, even prior-voters who gained a non-voting partner saw no discernible drop in turnout. Together these findings suggest that the positive impact of gaining the opportunity for voting together (the companion effect) outweighs any potential negative effects of anti-voting social norms. To sum-up, the results in Fig. 4 provide support for our first hypothesis: acquiring a potential voting companion leads to an increased turnout probability, whilst losing one has the opposite effect.
We noted above that, despite the panel design, it is possible that the observed correlation between changes in turnout behavior and household status could be the result of a third factor driving both. An alternative way of approaching the question, which overcomes this, is to examine whether individuals who voted together in one election behave differently in subsequent elections to those who lived together but did not vote together (H2). More specifically, we can look at whether individuals who lose a voting partner are more adversely affected than individuals who split from a person they did not vote with. By focusing only on households that broke up we avoid the problem of unobserved variables that correlate with both household break-up and changing turnout. To test this, we focus our analysis on households with two eligible electors who voted in the 2013 election and which subsequently split up in the six-month period between the 2013 and the 2014 elections. In other words, they did not live together at the 2014 election, but did so at the 2013 election. In contrast to the previous analysis, this has the advantage that we are able to test directly the effect of the loss of a voting companion, as opposed to any other household partner. We do not make any restrictions on their new household, i.e. they can be single or live with someone new, but we include an indicator of whether they lived with someone else in 2014. We look at the 2013 and the 2014 elections as 2013 is the first election for which time stamps are available (i.e. we do not have time stamps in 2009).
We estimate a logit model where the dependent variable is turnout in the 2014 European parliament election. We restrict the sample to those who voted in 2013 and had a partner who also voted to maximize the comparability of the 'treatment' and 'control' since, in both groups, the subject lived with an elector who voted in the previous election. The only difference between the groups is that in the companion 'treatment' group the pair attended the polling station together. Thus the key independent variable is whether the individual voted with the partner in 2013. Our expectation is that voting together in 2013 would have a negative effect on the change in turnout between the elections, as this would imply the loss of a voting partner as opposed to the loss of a partner who voted separately. In other words, if having a voting companion is important, then losing a voting companion should be more detrimental to turnout than losing a non-companion. This also allows us to separate the effect of merely living with a voter (which might be associated with increased normative influence or increased flows of information) from the effect of the opportunity to vote together. In other words, if the effects are just as large for the loss of a 'non-companion' co-habitee, this would suggest it is not the companion effect at play (and vice versa). The results are presented in Fig. 5. As in Fig. 4 we base the model on a matched sample created by CEM on pre-treatment variables and we apply appropriate weights in the regression. Note that the elections are only 6 months apart and therefore we are not able to control rigorously for time-varying variables which are mainly annual in the Danish registers. In Table 9 of the appendix we show the results numerically and in Appendix Table 10 we present an alternative model with no prior matching which yields similar results.
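A hedged sketch of this second design is given below. The DataFrame `splits` and its columns (`turnout_2014`, `voted_together_2013`, `lives_with_other_2014`) are hypothetical; the sample is assumed to be restricted, as described above, to 2013 voters whose household partner also voted in 2013 and whose household dissolved before the 2014 election.

```python
# Sketch of the split-household comparison: a logit of 2014 turnout on whether
# the lost partner had been an actual voting companion in 2013.
import statsmodels.formula.api as smf

model = smf.logit(
    "turnout_2014 ~ voted_together_2013 + lives_with_other_2014",
    data=splits,
).fit()
print(model.summary())

# Average marginal effect of the treatment indicator, in probability units
# (multiply by 100 for the percentage points reported in the text).
print(model.get_margeff().summary())
```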
In line with the expectations we find a negative estimate of more than 8% points for individuals who split from a voting companion compared to someone who voted separately (see Fig. 5 or Appendix Table 9). In other words, the negative effect of splitting is markedly higher for individuals who in the first election voted together than for individuals who split from a person they did not vote with, even though we focus only on voters. We cannot completely exclude the possibility that these differences are partly driven by relevant unobserved differences between splitting couples that had previously voted together and alone. In other words, there may be some unobserved factor that is correlated with both the transient component of turnout and whether a voting companion or non-companion was lost (e.g. splitting from a spouse compared to a flat-mate). However, given the protection offered by the panel design, the restriction of the sample, and the CEM, along with the substantial size of the effect, the results provide strong evidence in favor of the companion effect as an important mechanism driving turnout.
Conclusion
It has long been argued that voting is a social phenomenon, subject to the effects of inter-personal influence through shared information, indirect mobilization and social norms. More recently it has been argued that citizens frequently go to the polls together and that this has consequences for turnout. However, the phenomenon has been difficult to examine empirically, as questions about voting partners are not routinely asked in surveys, the self-reported turnout of both respondents and their political discussants may be subject to response bias (through social desirability), and in addition there is often under-representation of non-voters in surveys. Furthermore, the extent to which voting together matters for turnout is difficult to study as individuals, by definition, can only vote together when they vote. The counterfactual - "would those individuals have voted in the absence of a voting partner?" - cannot be answered even with (cross-sectional) network survey data. In this study we contribute to the literature by tackling this question using a longitudinal large-scale validated register dataset with the exact timing of the vote for more than two million individuals in three elections.
Voting with others is remarkably widespread. About 29-35% of all eligible Danes voted with another voter at the polling stations in the three elections under investigation, and if we restrict ourselves to voters only, the number is even higher - between 41 and 51% of voters vote with other household members at the polling station. We also showed that voting together occurs frequently in Britain, but less so than in Denmark, largely due to the frequent use of postal voting in the UK. Moreover, as hypothesized, voting together seems largely to be driven by opportunity and closeness in households - e.g. married individuals vote more frequently with others than non-married individuals. Also, high-propensity voting groups seem to vote less with others relative to voting alone - perhaps because they are more resilient to the lack of a potential voting partner.
Investigating whether voting together has a causal effect on turnout is challenging. We addressed the question by using this unique dataset to look at the consequences of obtaining a potential voting partner and losing an actual voting partner. What we found, consistent with our hypotheses, is that individuals who gained a potential voting partner between two elections had an increased probability of voting. Likewise, individuals who split from a voting partner saw a greater drop in the probability of voting than individuals who split from a household member who had not been a voting partner. These results provide support for the argument that voting is a social act and, more specifically, that the opportunity to go to the polling station to vote in the company of another voter (the companion effect) is not simply a function of normative influence. While there are challenges to causal inference, our study adds to the existing literature by demonstrating both the extent and impact of voting together. It is worth noting that the effect sizes we have found are large in comparison to typical effect sizes in get-out-the-vote interventions. Our findings have important implications for understanding the decline in turnout in advanced democracies across the world. If companion effects of voting together encourage voting, then part of this decline is likely to be attributable to changes in family and household structure. With the steady increase in single-person households since the 1960s, the opportunities for voting together have declined for many electors. And for some this means not voting at all rather than voting alone.
Acknowledgements
We would also like to thank the anonymous reviewers and the editor for their constructive comments.
Data
The survey data used in the article are available for replication, including the code for replication of all analyses in the article. Replication files are available at the Political Behavior Dataverse: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/SVGV76. The government administrative data are stored on secured servers at Statistics Denmark. Due to security and privacy reasons, the data cannot be made publicly available on the Internet.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendix
See Appendix Figure 6 and Appendix Tables 3, 4, 5, 6, 7, 8, 9 and 10. Table captions and notes:
Appendix Tables 3-5: Unstandardized logit coefficients. Standard errors in parentheses, clustered by household ID. *p < 0.05, **p < 0.01, ***p < 0.001. 1% winsorizing applied to the income variable.
Appendix Table 6: Unstandardized logit coefficients. Standard errors in parentheses. *p < 0.05, **p < 0.01, ***p < 0.001. "−" denotes that the variable in question was not available in the survey. HH income is in 10,000 £ in the UK and 100,000 DKK in Denmark. Weights applied to the Danish data. The sample is restricted to 2+ sized households for UK2013, UK2015 and DK2013, consistent with Appendix Tables 3, 4 and 5; we did not make this restriction for DK2015 due to the unavailability of household size information.
Appendix Table 7 (Logistic regression of 2013 municipal election turnout based on exactly matched samples of individuals and divided on 2009 turnout): Unstandardized logit coefficients. Standard errors clustered by 2009-households in parentheses. *p < 0.05, **p < 0.01, ***p < 0.001. ∆ Married is scaled from −1 to 1, where −1 is getting divorced between the two elections, 0 is unchanged status and 1 is being married. ∆ Education is scaled from −4 to 4, reflecting 5 categories. 1% winsorizing applied to the ∆ income variable. ∆ Residential stability is the difference in the number of 1000 days at the current address at the elections. CEM is conducted on pre-treatment age (one category for each year of age), education (5 categories), civic status (married vs. non-married), income (6 categories) and residential stability (9 categories).
Appendix Table 8 (robustness test without prior matching): Unstandardized logit coefficients. Standard errors clustered by households in parentheses. *p < 0.05, **p < 0.01, ***p < 0.001. ∆ Married, ∆ Education, the ∆ income winsorizing and ∆ Residential stability are defined as in Appendix Table 7.
Appendix Table 9 (Logistic regression of 2014 European parliament election turnout, 2013 voters who lost voting partners, based on an exactly matched sample of individuals): Unstandardized logit coefficients. Standard errors clustered by 2013-households in parentheses. *p < 0.05, **p < 0.01, ***p < 0.001. ∆ Married and ∆ Residential stability are defined as in Appendix Table 7. CEM is conducted on pre-treatment age (one category for each year of age), education (5 categories), civic status (married vs. non-married), income (6 categories) and residential stability (9 categories).
Appendix Table 10 (alternative model without prior matching): Unstandardized logit coefficients. Standard errors clustered by 2013-households in parentheses. *p < 0.05, **p < 0.01, ***p < 0.001. ∆ Married is scaled from −1 to 1, where −1 is getting divorced between the two elections, 0 is unchanged status and 1 is being married.
∆ Residential stability is the difference in the number of 1000 days at the current address at the elections. Pre-treatment variables are age, education, civic status, income, and residential stability | 2022-12-22T15:06:17.449Z | 2018-07-27T00:00:00.000 | {
"year": 2018,
"sha1": "9a56bf6edcd0de367d204bb5f0d772f0b9ec1b75",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11109-018-9484-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "9a56bf6edcd0de367d204bb5f0d772f0b9ec1b75",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
270039675 | pes2o/s2orc | v3-fos-license | The Impact of SARS-CoV-2 Pandemic on Antibiotic Prescriptions and Resistance in a University Hospital from Romania
This paper aimed to evaluate the effects of the COVID-19 pandemic on prescription rates and antibiotic resistance in a university hospital. A retrospective study was conducted on the medical records of patients admitted to the Bihor Emergency Clinical County Hospital in Romania in 2019 (pre-pandemic) and 2021 (during the pandemic period). We evaluated the antibiotic consumption index (ACI) and susceptibility rates. The overall percentage of antibiotic prescribing increased in 2021, while the total number of patients decreased. Genito-urinary, digestive and respiratory infections, heart diseases and wounds were the most common conditions for antibiotic prescriptions, but their number decreased in 2021. There was a decrease in the proportion of antibiotics from the Watch and Reserve classes and an increase in the proportion of antibiotics from the Access class. In the ICU wards, antibiotic use was reduced despite an increase in the number of patients, with high consumption in the Watch group. By contrast, surgical wards had the highest rate of antibiotic prescriptions, but a decrease in the number of patients. The patients who were administered antibiotics were hospitalized for diagnoses other than COVID-19. Almost all prescribed antibiotics displayed decreasing sensitivity rates. The number of isolated ESKAPE pathogens, except for methicillin-resistant Staphylococcus aureus strains, increased. Strategies to control antibiotic prescriptions and the spread of resistant pathogens should be improved.
Introduction
Antibiotics are prescribed in medical practice for prophylactic and therapeutic purposes for infections.The improper use (wrong dose or overdose) of antibiotics facilitates the development of antibiotic resistance in microorganisms, allergies, dermatological, hematological, renal and neurologic adverse events and causes side effects-mostly gastrointestinal: e.g., nausea, diarrhea, C. difficile infections [1][2][3][4].Antimicrobial resistance (AMR) is a worldwide concern and public health problem that is exacerbated by the overuse of antibiotics, with a large economic burden on healthcare systems.Many reports have highlighted an impending rate of 10 million yearly deaths due to AMR by 2050, amounting to one death every three seconds.Otherwise, AMR was categorized as the top ten global public health threat by the World Health Organization in 2019, necessitating immediate interventions [5,6].For these reasons, it is very important to know and apply rational drug use into clinical practice.
Rational antibiotic use is defined as taking antibiotics in accordance with the clinical needs of the patient, at appropriate doses, for sufficient time, at the lowest cost to themselves and society.Information concerning the indications and doses of antibiotics and prophylaxis is updated frequently and medical staff should follow the current guidelines, prescribe according to results from exploratory tests and attend education programs.Although the literature contains studies evaluating the patterns of prescription, knowledge and attitudes of medical staff with regards to antibiotic prescription, there are not enough studies in this field [7,8].
AMR has particularly grown in the last two decades, becoming an urgent threat to global health [9,10].In hospitals and wards with a high rate of antimicrobial prescription such as intensive-care units and surgical wards, infections caused by difficult-to-treat bacteria are increasingly associated with elevated mortality and increasing hospital costs [11][12][13].The European Centre for Disease Prevention and Control (ECDC) reported more than 670,000 infections and 33,000 deaths due to multi-resistant bacteria in the course of 2019 [14].Antimicrobial stewardship (AMS) must improve the outcomes, quality and safety of patient care at the same time as preventing or controlling the spread of AMR.Ultimately, this consists of responsible antibiotic use and prescribing effective antibiotics to treat infections.According to many national action plans, the target is to reduce antimicrobial use, both for humans and animals, and this could start with a reduction in community antibiotic use [15,16].To achieve this goal, better understanding of current antibiotic prescribing pattern is needed.However, the implementation of guidelines in primary care and hospitals has not been satisfactory in many countries [17].Analyses of patterns of antibiotic prescribing involve the renewal of short-term antibiotic prescriptions for acute issues that exist beyond a single course of treatment, general practices and hospital prescribing habits, as well as additional infections that occur over a certain period.The reasons for antibiotic prescribing may not always be well documented, with up to half of antibiotic prescriptions being unrelated to any specific medical diagnosis recorded [18].
Coronavirus disease (COVID-19) is an infectious respiratory disease caused by the SARS-CoV-2 virus.Both COVID-19 and bacterial pneumonia share similar clinical features, but the COVID-19 pandemic has challenged the implementation of antimicrobial stewardship programs and the use of antimicrobials in clinical practice.On the other hand, the impact of the COVID-19 pandemic on antimicrobial resistance is still largely unknown; however, there is evidence that antimicrobial prescription habits have been profoundly affected [19].According to the Center for Disease Control and Prevention (CDC), the COVID-19 pandemic is responsible for a 15% increase in AMR and related deaths in hospitals in 2020 [20].Therefore, exploring data concerning the impact of the COVID-19 pandemic on antibiotic prescribing and AMS is an essential step for improving and updating existing policies.
The pandemic has triggered an unexpected crisis in health care systems for healthcare workers, patients and hospitals.Different bacterial infections occur in COVID-19 pandemic patients-particularly in patients with severe disease, with the majority being hospitalacquired co-infections [21].The most frequent causal agents of hospital-acquired infections are a group of six virulent and resistant pathogens called ESKAPE pathogens, referring to Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter species.The list of pathogens can be completed with Escherichia coli because of its importance as the main etiological agent of many infections and the reason for the prescription of many antibiotics.Enterococcus sp.includes Enterococcus faecium and Enterococcus faecalis, gram-positive commensals commonly found in the gut.Enterococci are associated with hospital-acquired infections including urinary and catheterassociated urinary tract infections, surgical site infections and bloodstream infections.According to the CDC, about 30% of all healthcare-associated enterococcal infections are resistant to vancomycin, resulting in reduced treatment options, and these infections were estimated to cause 5,400 deaths in 2017 in the United States [22].
Although these pathogens are frequently isolated from environments such as surface water, wastewater and soil, they are also involved in hospital-associated infections.In hospitalized critically ill COVID-19 patients, complications involving a secondary infection usually involve multidrug-resistant (MDR) bacteria, at a rate ranging from 30-50%, with the majority being respiratory and blood stream infections occurring a week after admission [23].
The aim of this paper was to investigate whether the prescription of antibiotics for specific infections caused by ESKAPE pathogens and antibiotic resistance in our hospital changed during the COVID-19 pandemic, and to identify new trends in prescriptions that may be a consequence of the COVID-19 pandemic. The primary objective of the study was to evaluate the consumption of prescribed antibiotics and the clinical conditions associated with this, and the second was to compare the microbiological profile of the hospital before and during the COVID-19 pandemic.
Results
A total of 36,076 and 23,903 patients from 2019 and 2021, respectively, were admitted to the hospital. The overall number of antibiotic prescriptions was for 19,502 patients in 2019 and for 14,660 patients in 2021, respectively, showing an increase in 2021 compared to 2019 (61.33% versus 54.06%, p < 0.00001) while the total number of patients decreased. The general demographic characteristics of the study population are summarized in Table 1. The age of patients admitted in 2019 and 2021 ranged from 5 to 98 years, with an average of around 54, but the difference in the overall age category was not statistically significant (Figure 1). The age category of 60-80 encompassed the greatest number of admissions in 2019 and 2021. However, the age of the patients was younger in the case of patients hospitalized in 2021. The outcome documented a doubling of the mortality rate in 2021 (4.63% versus 8.04%, Pearson's Chi-squared test, p < 0.05), but this interpretation must be performed carefully in terms of causes of deaths. We explored conditions for which antibiotics are prescribed, and we classified them as follows: genito-urinary, digestive and respiratory infections, heart diseases and wounds were the most common conditions for antibiotic prescriptions, but the number of them decreased in 2021. Conversely, the number of surgical complications treated with antibiotics and other viral etiologies increased in 2021 (Table 2). A total number of 911 patients were diagnosed and hospitalized for COVID-19, and half of them (584 patients) were treated with antibiotics in 2021. Out of the 584 patients, 475 presented with COVID-19 and bacterial infections. If we take into account these 475 COVID-19 patients treated with antibiotics and refer to the total number of patients in 2021, it turns out that the patients who were administered antibiotics were hospitalized for diagnoses other than COVID-19. However, a significant number of patients, regardless of the evaluated year, were prescribed antibiotics without having a specific reason (Table 2, other conditions without infectious disease), but the number of them decreased.
Most of the patients who were prescribed antibiotics were admitted to the surgery and medical wards, but the number of patients decreased in 2021. Instead, the number of patients who were prescribed antibiotics after being admitted to the ICU in 2019 was the lowest; this number later increased, surpassing that of patients admitted to medical wards in 2021 (Table 3). Table 4 documents that surgical wards had the highest rate of antibiotic prescriptions, expressed as ACI, followed by medical wards with a significant increase in 2021, while the consumption of antibiotics in the ICU decreased slightly. The prescription of antibiotics increased five times in the surgical wards and three times in medical wards, but statistical significance was obtained only for surgical wards and the ICU (p < 0.00001, Table 4). Most of the antibiotics prescribed showed increases in prescribed doses in 2021, but statistical significance was obtained only for ceftriaxone, cefixime, cefuroxime, metronidazole, ampicillin, amikacin, gentamicin, rifaximin and clindamycin. On the other hand, teicoplanin and benzylpenicillin were prescribed less often in 2021, while cefaclor and cefazolin were no longer prescribed at all.
The global evaluation of the consumption of antibiotics according to the AWaRe classes showed a high consumption of antibiotics in the Watch group, followed by the Access group. Worth noting is the decrease in the proportion of antibiotics from the Watch and Reserve classes and the increase in the proportion of antibiotics from the Access class (p < 0.00001) when comparing 2021 to 2019 (Table 5).
Although they did not reach statistical significance, increases in the prescribed doses were also recorded for antibiotics in the Reserve group, but these remained the lowest prescribed doses (Table 8).
In relation to the consumption of antibiotics, we also explored their sensitivity rates; amikacin, piperacillin/tazobactam, ceftazidime/avibactam, ceftazidime, cefepime, cefotaxime, cefixime, trimethoprim/sulfamethoxazole, imipenem, meropenem, ertapenem, benzylpenicillin, azithromycin, ciprofloxacin and levofloxacin showed significantly decreasing sensitivity rates. Conversely, sensitivity rates increased for fosfomycin, vancomycin, clindamycin, colistin and ofloxacin (Table 10). We explored the isolated ESKAPE pathogens and their resistance phenotype. There was an increase in the number of isolated Acinetobacter baumannii, Pseudomonas aeruginosa and Klebsiella pneumoniae strains, as well as in those expressing a resistance phenotype - particularly to carbapenems (p < 0.00001) - during 2021. The number of isolated E. coli and Enterobacter sp. strains decreased slightly, but the number of ESBL-producing and carbapenem-resistant strains increased significantly. There was a decrease in the number of isolated Staphylococcus aureus strains, as well as MRSA ones (Table 11).
Discussion
In this study, we used data from medical records and reports from some hospital departments to explore whether and how the COVID-19 pandemic has influenced antibiotic consumption and the microbiological profile. For this purpose, we evaluated 2019 as a pre-pandemic year and 2021 as a year during the pandemic.
The widespread transmission of SARS-CoV-2 infections has been a challenge for healthcare facilities, changing some practices, increasing the number of admissions and exerting pressure on personnel and medical facilities [24,25]. In the current study an increasing trend was observed, both in terms of antibiotic consumption and the proportion of patients treated with antibiotics, although the total number of patients decreased. Total antibiotic consumption expressed as ACI before the pandemic period was 1075.07 and the total use of antibiotics during the pandemic was 2548.57, representing a significant increase. Most studies showed increasing antibiotic prescriptions during the COVID-19 pandemic [26,27], but there are other studies describing a reduction in their prescription [28,29]. Many factors may be considered to explain the increased prescription of antibiotics during the COVID-19 pandemic, such as a misunderstanding of how to treat these infections, inexperience, hospital overcrowding, the limited number of medical staff versus the number of patients, changes in the antimicrobial stewardship team's activity and a lack of initial therapeutic protocols. This emphasizes the importance of antimicrobial stewardship in controlling and optimizing antibiotic use in hospitals, including in emergencies and in situations like the COVID-19 pandemic [30]. Age does not appear to impact antibiotic prescribing patterns significantly, but gender does. It seems that female patients are administered antibiotics more frequently regardless of the pandemic context [31].
The reported mortality rate represents a partial count of the total deaths from the COVID-19 pandemic, but the distribution and significance of other causes of death have changed for many reasons (social, health policy, economic, behavioral). The reliability of reported deaths varies greatly between countries, locations and hospitals, and over time, and it is difficult to frame COVID-19 as the main cause. When compared with the pre-pandemic period, the mortality rate of nearly 8.04% during the pandemic was increased, as international statistics and other studies have shown [32-34]. There could be many explanations for this, such as aggravated chronic conditions or presentation to a hospital during the complication phase, with or without COVID-19 infection.
According to current guidelines, the prescription of antibiotics is recommended under specific clinical conditions (infections) or for prophylactic purposes. The significant clinical conditions for which increasing doses of antibiotics were administered were respiratory conditions, surgical complications, genito-urinary infections and wounds, while those for digestive conditions decreased. In our study, the number of patients who were administered prophylactic antibiotics increased, even though the total number of patients decreased. In addition, there was a significant number of patients who were prescribed antibiotics without having a specific reason, but this number decreased in 2021. The result is an excess of prescriptions and a need for intervention from the antibiotic stewardship committee, for regular updates of clinical practice guidelines - especially for empirical antibiotic treatment - and for adherence to these guidelines and education [35]. Stewardship program interventions must improve the quality of care through the improvement of prescription decisions in hospital settings.
A total of 911 patients with COVID-19 were hospitalized, and half were treated with antibiotics, presenting with bacterial complications (52.14%). These patients also presented with advanced age and comorbidities that required antibiotic treatment. The proportion of patients treated with antibiotics that had COVID-19 was 3.24%, which shows that antibiotics were mainly prescribed for conditions other than COVID-19. More than half of the patients with COVID-19 and those of older age admitted into the ICU were in critical condition or died. The number of patients who were administered antibiotics increased in the ICU in 2021, but the consumption of antibiotics in the ICU decreased in 2021 (p < 0.00001). This can be explained by the administration of antibiotics only to patients who had a documented bacterial infection, regardless of whether they were COVID-19 patients or not. By contrast, surgical wards had the highest rate of antibiotic prescriptions and a reduction in the number of patients, similar to other studies [36]. This aspect can be explained by the doubling of the number of patients with surgical complications. Similar aspects have also been described in the medical wards, i.e., a reduction in the number of patients with a significant increase in the consumption of antibiotics, expressed as ACI. The explanations for this could be related to the patients or to prescribing habits. As the patients presented late, in the complications phase, the prescribing doctors applied the strongest treatments, not necessarily following the guidelines - including the guidelines for COVID-19 management.
The World Health Organization (WHO) has released the WHO AWaRe (Access, Watch, Reserve) classification of antibiotics in addition to the model list of essential medicines [37]. It is very useful for guidance on the use of antibiotics, taking into consideration the risk of antimicrobial resistance development, and it could be used by many hospitals. The Watch group of antibiotics has a higher potential for developing antimicrobial resistance and their use should be carefully monitored. Although the prescription of these antibiotics was shown to be higher in 2019, a decrease for ceftriaxone, cefuroxime, cefixime and rifaximin was still observed during COVID-19 in our study. The Reserve group comprises last-option antibiotics that should only be used for the treatment of severe infections caused by multi-drug-resistant pathogens. In our study, we observed a decreasing trend for the use of colistin, tigecycline, linezolid, ceftazidime/avibactam and imipenem/cilastatin/relebactam. Access antibiotics are antibiotics with fewer side-effects and a lower potential for the development of antimicrobial resistance, and they should be used for the empiric treatment of the most common infections. In the present study, the trend for metronidazole, amikacin, ampicillin, gentamicin and clindamycin use during the COVID-19 pandemic increased, and this is a good direction to be continued.
One study reported that reduced hospitalizations due to respiratory tract infections during the COVID-19 pandemic led to a decrease in the consumption of antibiotics, especially penicillins and beta-lactamase inhibitors [38]. In our study, the consumption of cephalosporins, imidazoles, penicillins, rifamycins, lincosamides and glycopeptides, expressed as ACI, increased in 2021 compared to 2019, and only two antibiotics showed decreases in 2021 - benzylpenicillin and teicoplanin - as other studies have also reported [39-41]. Other studies reported a decrease in cephalosporin prescriptions during the COVID-19 pandemic, along with the prescription of macrolides, lincosamides and quinolones [42,43].
Despite insufficient statistical power, we observed an increase in quinolone prescriptions, contrary to many other studies showing a reduction in quinolones throughout the COVID-19 pandemic; the prescription of these antibiotics should be monitored and rationalized for specific indications for children and adults [39,44]. These differences in results could be explained if fluoroquinolones are used as empirical therapies rather than for a single specific antimicrobial therapy against pathogenic germs [41].
From the imidazole class, only metronidazole was prescribed, showing the most significant increase in the prescription rate - practically 12 times more. Coronavirus disease 2019, involving the upper respiratory tract followed by severe pneumonia, respiratory distress and/or even death, has rapidly emerged as a global pandemic. Many studies have found higher blood levels of some pro-inflammatory cytokines during this infection. In the same context, there are studies showing that metronidazole could decrease the levels of some cytokines - especially interleukins 8, 6, 1β, 12 and 1α, tumor necrosis factor (TNF) α and interferon γ - as well as the levels of C-reactive protein (CRP) and neutrophil counts. An increased consumption of metronidazole has been recorded, prescribed more often for non-COVID patients [45].
ESKAPE pathogens are involved in the increases in morbidity and mortality related to antibacterial resistance. During COVID-19, co-infections - especially with Gram-negative ESKAPE bacteria and fungi - were more frequent in patients with severe COVID-19 symptoms than in patients with milder symptoms [46]. A. baumannii, K. pneumoniae and Enterobacter spp. are the most prevalent strains in nosocomial pneumonia, complicating the management of COVID-19 patients who need to be ventilated in the intensive care unit (ICU), and all these pathogens were increased in number during 2021, including increases in carbapenem-resistant and ESBL pathogens. Among the patients with bacterial infections, the majority had Gram-negative infections and/or mixed infections with Gram-positive and Gram-negative pathogens, regardless of the evaluated year. Among the Gram-negative bacteria, K. pneumoniae was the predominant pathogen, followed by A. baumannii. In one study, it was reported that during COVID-19, P. aeruginosa was the main pathogen associated with ventilator-associated pneumonia in critical patients, but in our study, it represented only the third most common etiology after A. baumannii and K. pneumoniae. Carbapenems are considered the most appropriate agents to treat Gram-negative infections, but these strains displayed high levels of resistance to carbapenems in 2021. Due to these resistance mechanisms, polymyxins are considered the best option for the treatment of CR-A. baumannii and CR-P. aeruginosa infections [47].
Methicillin-resistant Staphylococcus aureus (MRSA) is one of the major pathogens responsible for bloodstream infections and multi-drug resistance (MDR), and it is commonly associated with hospital-acquired infections. Studies have reported conflicting findings on whether MRSA prevalence rates decreased during the COVID-19 pandemic, with variation between hospitals, populations and geography described [48,49]. Our findings are in accordance with these studies, but the sensitivity rates to antibiotics effective against MRSA (vancomycin, clindamycin, teicoplanin) have increased, and these agents are less prescribed.
The main limitations of our study are as follows: First, we explored a single hospital and its prescription practices, the susceptibility rates of the antibiotics used and the resistant pathogen profile. Second, we were unable to examine the prescriptions case-by-case or explore each clinical situation or other factors involved in therapeutic decisions. Prolonged hospitalization and immunosuppression during COVID-19 exposed those patients to infectious complications with resistant pathogens and increased antibiotic prescriptions.
Materials and Methods
This retrospective study compared data concerning antibiotic use by analyzing the medical reports of patients admitted to the Bihor Emergency Clinical County Hospital, Romania between 1 January and 31 December of 2019 (before the COVID-19 pandemic) and 1 January-31 December of 2021 (during the COVID-19 pandemic year). This is a teaching multidisciplinary hospital with more than 500 beds and a large number of pathologies, grouped in this study into the Intensive Care Units (ICUs), surgical and medical wards.
Data were collected from the patients' electronic medical records. The study population included all inpatients registered in two cohorts corresponding to 2019 and 2021, respectively, who received antibiotics in different departments of the hospital. Age, gender and outcomes were extracted and analyzed. Data on antibiotic prescriptions were extracted from the pharmacy's reports.
The study of antibiotic prescriptions was performed using the Anatomical Therapeutic Chemical Classification System (ATC/DDD, 2016) developed by the WHO Collaborating Centre for Drug Statistics Methodology, ATC/DDD Index 2022 [50]. First, antibiotic prescription patterns were expressed in grams, and then we calculated the antibiotic consumption index (ACI) for each antibiotic using the following formula: ACI = [total dose of antibiotic (grams)/(DDD × total patient-days)] × 100. We classified antibiotics into pharmacological classes and WHO AWaRe (Access, Watch, Reserve) categories [51]. We considered genito-urinary, digestive and respiratory infections, heart diseases and wounds as the main diagnoses for which antibiotics were prescribed.
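Reading the formula as defined daily doses consumed per 100 patient-days (total grams divided by the DDD, then normalised by patient-days), a minimal illustration is given below; the figures used are placeholders, not the hospital's actual data, and DDD values in grams must be taken from the WHO ATC/DDD index.

```python
# Illustrative computation of the antibiotic consumption index (ACI) defined
# above, i.e., DDD consumed per 100 patient-days. Numbers are invented.
def aci(total_grams: float, ddd_grams: float, patient_days: int) -> float:
    """Defined daily doses consumed per 100 patient-days."""
    return total_grams / ddd_grams / patient_days * 100

# Example: 12,000 g of a drug with DDD = 2 g over 150,000 patient-days.
print(round(aci(12_000, 2.0, 150_000), 2))  # 4.0 DDD per 100 patient-days
```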
Antibiotic susceptibility testing was performed with a Vitek 2 system according to the recommendations of the European Committee on Antimicrobial Susceptibility Testing (EUCAST) criteria [52].
Statistical analysis was performed using the R program (https://www.r-project.org/ (accessed on 4 January 2024)), version 4.3.1. Descriptive statistics were used to summarize the baseline characteristics of the study population, including age and sex. To verify the statistical significance of the obtained results, the Chi-square test (χ2) and the t-test were used. The confidence interval was set at 95%, with a statistical significance threshold of 0.05.
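For illustration only, the prescription-rate comparison reported in the Results (19,502 of 36,076 patients prescribed antibiotics in 2019 versus 14,660 of 23,903 in 2021) can be reproduced as a chi-square test on a 2 × 2 contingency table. The sketch below uses Python's scipy rather than the R program used in the study, purely as an example.

```python
# Chi-square test of the prescription proportions reported in the Results.
from scipy.stats import chi2_contingency

table = [[19_502, 36_076 - 19_502],   # 2019: prescribed vs. not prescribed
         [14_660, 23_903 - 14_660]]   # 2021: prescribed vs. not prescribed
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p << 0.05, consistent with p < 0.00001
```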
We calculated antibiotic sensitivity rates by considering all isolated strains per year, with a minimum of 30 of each isolate, using WHONET 2023 software. For 2019, we performed this evaluation on 5987 strains, and on 6212 strains for 2021. Results were expressed as %R, %I and %S, with 95% confidence intervals for %R.
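As an illustration of how a 95% confidence interval for a resistance or susceptibility percentage can be obtained, the sketch below applies a Wilson score interval to invented isolate counts; WHONET's exact method may differ.

```python
# Hedged example: 95% CI for a susceptibility percentage from invented counts.
from statsmodels.stats.proportion import proportion_confint

susceptible, tested = 84, 120          # hypothetical isolates
low, high = proportion_confint(susceptible, tested, alpha=0.05, method="wilson")
print(f"%S = {susceptible / tested:.1%} (95% CI {low:.1%}-{high:.1%})")
```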
Individual patients' written informed consent was obtained at admission. This study was approved by the Ethics Committee of the County Clinical Emergency Hospital of Oradea and is in full agreement with the World Medical Association Declaration of Helsinki.
Conclusions
The World Health Organization (WHO) has recognized that the inappropriate and irrational use of antibiotics has been followed by antibiotic resistance and that the COVID-19 pandemic has challenged the management of patients, antibiotic use and the surveillance of some critical categories of bacteria from the point of view of antibacterial resistance development. It is well known that the inappropriate use of antibiotics represents the major cause of AMR onset, and that this can be prevented. In addition, in cases of the weakening of immune system defenses occurring during viral or bacterial infections, as well as other immune diseases, antibiotic consumption and resistance trends must be controlled to anticipate subsequent changes. Our results show that patients who were administered antibiotics were hospitalized for diagnoses other than COVID-19. The escalation in antibiotic prescription among hospitalized patients and the simultaneous increase in antimicrobial resistance during the COVID-19 pandemic had a local impact on antibiotic consumption and antimicrobial resistance rates in 2021. However, the isolation of resistant isolates in the hospital setting emphasizes the need to apply and update antimicrobial stewardship programs. Antibiotic prescriptions should follow current guidelines and there is a continuous need for education in the correct diagnosis and treatment of infectious diseases, as well as rational drug use. Continued research efforts and strategies to control the spread of resistant pathogens are needed to address this public health threat.
Figure 1 .
Figure 1. Age of patients in 2019 and 2021.
Table 1 .
Distribution of the demographic characteristics of patients admitted before and during the COVID-19 pandemic.
Table 2 .
The distribution of clinical conditions (number of patients) for which antibiotics were prescribed.
Table 3 .
Number of patients treated with antibiotics according to the type of ward.
Table 4 .
Antibiotic consumption by grouped departments expressed as Antibiotic Consumption Index (ACI).
Table 5 .
Antibiotics prescribed according to AWaRe classification, expressed as ACI, in 2019 and 2021.
Table 6 .
Access group of antibiotics, expressed as ACI, in 2019 and 2021.
Table 7 .
Watch group of antibiotics, expressed as ACI, in 2019 and 2021.
Table 8 .
Reserve group of antibiotics, expressed as ACI, in 2019 and 2021.
Table 9 .
Evaluation of antibiotic prescribed by class, expressed as ACI, in 2019 and 2021.
Table 10 .
Sensitivity rate of the tested antibiotics.
Table 11 .
Number of isolated ESKAPE pathogens and their resistance phenotype. | 2024-05-26T15:37:55.843Z | 2024-05-23T00:00:00.000 | {
"year": 2024,
"sha1": "ac89060b7f1b93d40ca900366002c3c48f8f8d88",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6382/13/6/477/pdf?version=1716459033",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a149ac4842ab62d2c48b3eb716cc5f192fbb56f",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
236882530 | pes2o/s2orc | v3-fos-license | A glance at the chemodiversity of Ocimum species: Trends, implications, and strategies for the quality and yield improvement of essential oil
Ocimum species represent commercially important medicinal and aromatic plants. The essential oil biosynthesized by Ocimum species is enriched with specialized metabolites specifically, terpenoids and phenylpropanoids. Interestingly, various Ocimum species are known to exhibit diverse chemical profiles, and this chemical diversity has been at the center of many studies to identify commercially important chemotypes. Here, we present various chemotypes from the Ocimum species and emphasize trends, implications, and strategies for the quality and yield improvement of essential oil. Globally, many Ocimum species have been analyzed for their essential oil composition in over 50 countries. Asia represents the highest number of chemotypes, followed by Africa, South America, and Europe. Ocimum basilicum L. has been the most widespread and well-studied species, followed by O. gratissimum L., O. tenuiflorum L., O. canum Sims, O. americanum and O. kilimandscharicum Gürke. Moreover, various molecular reasons, benefits, adverse health effects and mechanisms behind this vast chemodiversity have been discussed. Different strategies of plant breeding, metabolic engineering, transgenic, and tissue-culture, along with anatomical modifications, are surveyed to enhance specific chemotypic profiles and essential oil yield in numerous Ocimum species. Consequently, chemical characterization of the essential oil obtained from Ocimum species has become indispensable for its proper utilization. The present chemodiversity knowledge from Ocimum species will help to exploit various applications in the industrial, agriculture, biopharmaceutical, and food sectors. Supplementary Information The online version contains supplementary material available at 10.1007/s11101-021-09767-z.
Introduction
Among the diverse specialized metabolites biosynthesized in the plant kingdom, volatile organic compounds (VOCs) constitute plant-derived essential oils. They are secreted and stored in different specialized structures, such as intra-cytoplasmic oil bodies, ducts and cavities, glandular trichomes, and osmophores (Jacobowitz and Weng 2020;Rehman et al. 2016).
Ocimum genus, which belongs to the Lamiaceae family, includes highly aromatic and essential oilbearing plants with a pantropical distribution Suddee et al. 2005). According to World Flora Online, 66 Ocimum species have been reported until now (http://www.worldfloraonline.org). However, only a few species, such as Ocimum basilicum Linnaeus (L.), O. gratissimum L., O. tenuiflorum L., O. canum Sims, O. americanum L. and O. kilimandscharicum Gürke have been predominantly valued for their phytopharmaceuticals, aroma and flavors. These Ocimum species are endowed with enormous phytochemical diversity. The essential oil of Ocimum species is a complex mixture of odoriferous VOCs. It has extensive applications in the culinary, cosmetics, medicinal, flavor, fragrance, perfumery, nutraceutical, and toiletry industries (Pandey et al. 2014;Singh et al. 2015). Different tissues of Ocimum species are utilized in fresh, dried, frozen form or distilled essential oil. The French, Greek, Italian, and Mexican cuisines include mainly fresh leaves of Ocimum species due to their unique aroma. Such fresh aromatic leaves are also suited as flavorings or spices in sauces, stews, salads, and decorations. It can be applied in other food preparations, such as meat, fish, butter, cheese, and beverages (Bown 2001;Meyers 2003;Piva et al. 2021), while essential oil is employed as a food preservative and flavoring agent (Li and Chang 2016). The nanocomposite film prepared from O. basilicum seed mucilage can be used for food packaging (Rohini et al. 2020). Further, the essential oil of O. basilicum has been applied to prepare edible coating and food packaging system to increase food shelf-life (Amor et al. 2021;Mohammadi et al. 2021). The various fragrant compounds from Ocimum species essential oil have found utility in personal care products like soaps, mouthwashes, perfumes, hair care, and dental products (Tucker and DeBaggio 2000). Over the years, Ocimum species have been traditionally exploited to treat various ailments in Indian Ayurveda and traditional African, Chinese and European medicine. Several species of the Ocimum genus possess multiple pharmacological properties, e.g., in vitro antimicrobial, antiviral, antimalarial activities and in vivo analgesic, anti-inflammatory, antidiarrhoeal, antidiabetic, anticancer, radiation protective, anti-hyperlipidemic activities, etc. (Ali et al. 2021;Pandey et al. 2014;Purushothaman et al. 2018;Santos et al. 2021;Singh and Chaudhuri 2018;Singh et al. 2015Singh et al. , 2016, whereas essential oil is valued in aromatherapy (Li and Chang 2016). On the other hand, silver and copper nanoparticles synthesized using aqueous leaf extract of O. americanum have shown therapeutic properties, including in vitro antibacterial, anticancer and catalytic properties, which can be used for photocatalytic dye degradation (Manikandan et al. 2021a, b). Recently, a molecular docking study showed that apigenin, oleanolic acid and ursolic acid from O. basilicum are potential inhibitors of chymotrypsin-like protease of severe acute respiratory syndrome coronavirus (SARS-CoV2) and could be effective in the treatment of coronavirus disease (COVID-19) (Matondo et al. 2021). Moreover, the hydrogel obtained from O. basilicum seeds paves the way in the biomedical field for targeted drug delivery and sustained drug release (Lodhi et al. 2020). Apart from this, O. basilicum leaf extract has been utilized for preparing mosquito repellent fabrics (Kantheti et al. 2020). 
Additionally, pesticidal activities like fungicidal, nematicidal, larvicidal, insecticidal, trypanocidal, etc., are exhibited by Ocimum species essential oil and their organic or aqueous extracts (Bhavya et al. 2021;Chowdhary et al. 2018;Singh et al. 2014). Furthermore, several Ocimum species have phytoremediation potential for the removal of toxic compounds, such as pesticides (Ramírez-Sandoval et al. 2011), organic dyes (Dada et al. 2020), crude oil (Choden et al. 2021) and heavy metals from soil (Lakshmanraj et al. 2009). Also, bioremediation of heavy metals like copper and chromium is facilitated by O. basilicum seeds (Gupte et al. 2012;Melo and D'Souza 2004). Ocimum basilicum seeds have been used as an effective coagulant for the treatment of textile and paper recycling waste water (Mosaddeghi et al. 2020;Shamsnejati et al. 2015).
The promotion of organic, natural, and green consumerism has led to an increased demand for plant-based products. Meanwhile, natural plant products have globally maintained their place in the market under competition from synthetic compounds. Subsequently, plant-derived essential oils are gaining ground despite the availability of synthetic substitutes of essential oils (Khan 2018). It is estimated that the market of Ocimum species essential oil will grow by 186.5 million USD from 2019 to 2023, with an 8% compound annual growth rate, and Europe will account for the largest market share (https://www.technavio.com/report/global-basil-essential-oil-market-industry-analysis). Overall, with such a huge market potential, essential oil of Ocimum species is of great economic importance for the developing countries in terms of foreign exchange revenue.
Based on the occurrence of one or more major chemical compounds above a fixed threshold level of relative concentration in the essential oil, several chemotypes have been identified from Ocimum species (Simon et al. 1990; Varga et al. 2017). Previously, Grayer et al. (1996) had proposed to describe the chemotype(s) based on all the major compounds constituting greater than 20% of the total essential oil, while many researchers have now considered compounds above 10% (Varga et al. 2017). Subsequently, Holm and Hiltunen (1999) summarized the data on Ocimum species chemotypes until 1999. To the best of our knowledge, no such attempt has since been made to summarize all available chemotype data from Ocimum species. Hence, we explore the existing chemodiversity from essential oil of Ocimum species with potential causes, mechanisms, and the role behind such vast diversity. Additionally, various biotechnological approaches are discussed that can be employed for chemotypic improvement with better essential oil yield and composition in Ocimum species.
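As an illustration of this convention, a minimal sketch of threshold-based chemotype labelling from a GC-MS composition profile is given below; the compound percentages are invented, and the 20% versus 10% cut-offs correspond to the two conventions mentioned above.

```python
# Hedged sketch: assign a chemotype label from a relative-abundance profile by
# listing every constituent above a threshold (20% per Grayer et al. 1996, or
# 10% in more recent studies). Composition values are hypothetical.
def chemotype(composition: dict[str, float], threshold: float = 10.0) -> str:
    """Return a chemotype label such as 'linalool/methyl chavicol'."""
    majors = sorted((c for c, pct in composition.items() if pct >= threshold),
                    key=lambda c: -composition[c])
    return "/".join(majors) if majors else "no major constituent"

profile = {"linalool": 42.3, "methyl chavicol": 18.7, "eugenol": 6.1, "1,8-cineole": 3.9}
print(chemotype(profile))         # linalool/methyl chavicol (10% threshold)
print(chemotype(profile, 20.0))   # linalool (20% threshold)
```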
Essential oil and chemical composition of Ocimum species
The essential oil of Ocimum species, also commonly known as basil oil, is biosynthesized and stored in specific structures called glandular trichomes present on the leaf, stem, and flower (Werker et al. 1993). A glandular trichome is made up of secretory cell(s) containing the enzymatic machinery for essential oil biosynthesis and an oil sac for storage. There are two types of glandular trichomes, viz., capitate and peltate, which can be distinguished based on their size and the number of secretory cells (Werker et al. 1993). The essential oil can be obtained from fresh, semi-dry, or dry aerial plant tissues at the flowering stage by steam distillation or hydro-distillation. The supercritical fluid extraction method is also used to avoid the loss of top notes from the essential oil during the distillation (Occhipinti et al. 2013). Interestingly, in several cases, the essential oil distilled solely from the flowers is superior in chemical composition and thus of high market value. The essential oil content in leaves of Ocimum species generally varies from 0.5 to 1.4%. However, the composition of essential oil, its yield, and content vary according to many factors, such as the variety, developmental stage, harvesting season, distillation method, geographical region, and climatic conditions of the plant used (Verma et al. 2013). Specialized metabolites like monoterpenoids, sesquiterpenoids, and phenylpropanoids majorly constitute the essential oil (Pandey et al. 2014). Their analysis and characterization from essential oil is conventionally carried out by gas chromatography-mass spectrometry and recently with advanced liquid chromatography-mass spectrometry.
From a biosynthetic point of view, terpenoids are biosynthesized by the condensation of two isoprene precursors, isopentenyl pyrophosphate (IPP) and dimethylallyl pyrophosphate (DMAPP). These are derived from the mevalonic acid (MVA) pathway localized in the cytosol or the 2-C-methyl-D-erythritol-4-phosphate (MEP) pathway localized in the plastid (Vranová et al. 2013; Zhou and Pichersky 2020). The principal monoterpenoids reported from Ocimum species essential oil are camphor, 1,8-cineole, citronellal, geranial, geraniol, linalool, limonene, neral, ocimene, terpinene, and thymol. Similarly, the major sesquiterpenoids are bisabolene, bergamotene, caryophyllene, cadinol, farnesene, and germacrene. Till now, more than 100 monoterpenoids and over 135 sesquiterpenoids have been identified from the essential oil of several Ocimum species with diverse applications (Tables 1 and 2). The phenylpropanoids are derived from the amino acid phenylalanine, which is converted in a few steps to 4-coumaroyl-CoA (Iijima et al. 2004). Later, coniferyl and coumaryl alcohols derived from 4-coumaroyl-CoA act as precursors for the biosynthesis of various phenylpropanoid derivatives (Gang et al. 2001; Lavhale et al. 2018, 2021; Liu et al. 2015). About 10 phenylpropanoids have been reported from essential oil of Ocimum species, and among them, eugenol, methyl cinnamate, methyl chavicol, and methyl eugenol occur predominantly (Table 3). The structures of the main terpenoids and phenylpropanoids present in the essential oil from various Ocimum species are depicted in Fig. 1. These metabolites have culinary, industrial, consumer and therapeutic applications (Tables 1, 2, and 3).
Chemotypes reported from various Ocimum species and their hybrids
The chemistry of O. basilicum is the most studied because of its worldwide distribution. Among the published reports on the chemical diversity in Ocimum species, most (43%) concern O. basilicum, followed by O. gratissimum (16%), while the fewest reports (0.3%) are available on O. adscendens Willdenow (Willd.), O. urticifolium Roth, and O. suave Willd. (Fig. 2). Globally, 16 Ocimum species have been analyzed for their essential oil composition across 55 countries. Chemotypes from the largest number of Ocimum species (nine to ten) are reported from Asia, followed by Africa, South America, and Europe (Fig. 3; Table S2). Furthermore, the species-wise predominant occurrence of specialized metabolites in the essential oil of different Ocimum species is shown in Fig. 4. Additionally, species-wise chemotype details are highlighted in the following section.
Additionally, a few reports concern different intra- and inter-specific Ocimum hybrids (Table S9). These studies suggest that intra- and inter-specific hybridization can, in general, generate new spectra of VOCs in hybrids that may not be present in the parental species (da Costa et al. 2016). For instance, a linalool/methyl chavicol chemotype has been identified from intraspecific hybrids between various O. basilicum cultivars. Further, hybridization also favored the generation of camphor, neral, geranial, β-selinene, bicyclogermacrene, (E)-caryophyllene, and methyl chavicol (da Costa et al. 2016) (Table S9). Recently, natural hybridization between terpenoid-rich O. kilimandscharicum and phenylpropanoid-rich O. basilicum has led to two novel hybrids with a methyl chavicol/linalool chemotype (Gurav et al. 2020) (Table S9).
Potential causes of the vast chemodiversity in Ocimum species
The chemical composition at any developmental stage of the plant is determined by its genotype along with the differential expression and regulation of genes involved in the biosynthetic pathways (Gonçalves and Romano 2013; Singh et al. 2015) (Fig. 5). For instance, O. gratissimum and O. tenuiflorum are both phenylpropanoid-rich species; however, higher expression of EUGENOL-O-METHYLTRANSFERASE (EOMT) in O. tenuiflorum leads to more methyl eugenol, whereas lower EOMT expression results in a eugenol-rich chemotype in O. gratissimum. Further, the genetic regulation of specialized metabolic pathways at the post-transcriptional and post-translational levels is considerably responsible for chemodiversity (Fig. 5). For example, a higher transcript and protein level of CHAVICOL-O-METHYLTRANSFERASE (CVOMT) is associated with the methyl chavicol-rich chemotype of O. basilicum (Xie et al. 2008). The vast array of terpenoids in Ocimum species is biosynthesized by different terpene synthases (TPSs), which have the exclusive ability to catalyze the formation of multiple products from a single substrate (Iijima et al. 2004) (Fig. 5). Additionally, both molecular and environmental factors affect the chemical composition (Verma et al. 2013).
The existence of different chemotypes in Ocimum species could also be attributed to cross-pollination leading to intra- and inter-specific hybridization, resulting in higher variation in the chemical profiles (Gurav et al. 2020; Varga et al. 2017; Khosla 1995). Also, natural evolutionary events, polyploidy, and selective breeding can play a significant role in chemical diversification, as observed in different Ocimum species (Carović-Stanko et al. 2010a; Iijima et al. 2004). Similarly, the occurrence of either phenylpropanoid- or terpenoid-rich Ocimum species may be attributed to the diversification of pathways during evolution. Thus, given such massive chemodiversity, the obvious question arises: why do plants generate these compounds?
Multiple benefits of chemodiversity to Ocimum species
Although humans have explored plant-derived aromatic compounds for their own benefit, plants do not produce these volatiles for human use; rather, they serve the plant's own ecological functions (Oxenham et al. 2005). Upon herbivore attack, VOCs released by the infested plant may further induce volatile emission from healthy leaves of the same plant or adjacent unchallenged plants (Baldwin et al. 2006). In Ocimum species, there is as yet no report of such metabolic priming of neighboring plants; in general, however, this metabolic priming results in a more rapid and intense defense response that can be mounted by healthy adjacent plants upon any subsequent herbivore attack (Engelberth et al. 2004; Kim and Felton 2013). Plants also compete with other nearby plants through the allelopathic effect of VOCs on their germination and growth (Romagni et al. 2000).
Potential adverse health effects of specialized metabolites present in Ocimum species
Though terpenoids and phenylpropanoids have been widely used in various applications, they enter the human body through oral, dermal, and nasal routes. Despite their many health benefits, some specialized metabolites present in the essential oil of Ocimum species, such as camphor, methyl eugenol, and methyl chavicol, could have toxic effects above particular concentrations, based on data from in vivo and in vitro studies in model organisms or cell lines (Bristol 2011; Johnson et al. 2000; Zuccarini 2009) (Table 4). For example, methyl eugenol and methyl chavicol are reported to have genotoxic or carcinogenic potential at specific levels (Table 4). Interestingly, aqueous or organic extracts of a specific tissue or the whole plant of Ocimum species were found to be less toxic (Table 4). Such toxicity is generally exerted by impairing mitochondrial function and/or causing lipid peroxidation (Agus 2021). These specialized metabolites mainly exhibit hepatotoxicity, as reactive metabolites and ROS are formed during their metabolism in the liver (Zárybnický et al. 2018). Consequently, such specialized metabolites must be utilized carefully in various applications owing to their acute or chronic adverse effects beyond a specific level. However, up to certain levels, these metabolites, either in purified form or as plant extracts containing a bouquet of compounds, might be safe for human usage. Further, these plant-based natural molecules are recommended to be used in formulations and not in pure form.
Strategies to improve the chemotypes in Ocimum species
For many aromatic crops, genetic enhancement using various approaches aims to improve the chemical composition and the essential oil and herb yield. Classical breeding methods, along with biotechnological interventions, can facilitate such improvement in the yield and quality of essential oil in important and popular Ocimum species. This includes approaches like metabolic engineering, transgenic techniques, and in vitro culture (Fig. 7), as further described, in addition to conventional breeding and selection (Lal et al. 2018). Morphologically similar but chemically distinct breeding lines of O. basilicum have been established, with eugenol (line SW) and methyl chavicol (line EMX-1) as the only phenylpropanoid components in their essential oil (Gang et al. 2001). Similarly, O. basilicum lines distinguished by camphor, methyl chavicol, and eugenol chemotypes have been developed with high essential oil content and herbage yield (Gupta 1994). The allelic basis behind the inheritance of such specialized metabolites has been discussed in a few studies (Dudai and Belanger 2016; Gupta 1994). Additionally, some breeding studies have focused on the development of O. basilicum varieties with improved agronomic traits, such as cold tolerance (Ribeiro and Simon 2007; Römer 2010) and disease resistance against basil Fusarium wilt (caused by Fusarium oxysporum) (Dudai et al. 2002) and downy mildew (Peronospora species) (Römer 2010). For ornamental purposes, O. basilicum lines with compact inflorescences have been generated (Dudai et al. 2002; Morales and Simon 1996). Interspecific hybridization can be employed to generate stable new varieties within three to four generations (Dudai and Belanger 2016).
Earlier studies have indicated the effect of ploidy level on essential oil production, such that polyploid plants accumulate significantly more essential oil than diploid ones (Dhawan and Lavania 1996; Lavania 2005). The polyploidy induction approach, treating seeds or other propagating material with colchicine, has been used for decades in crop improvement programs. Omidbaigi et al. (2010) induced tetraploidy in O. basilicum by colchicine treatment of seeds and the apical meristems of seedlings. This resulted in a 69% increase in essential oil content in the tetraploid plants compared to the diploids.
Additionally, polyploidy was generated in an interspecific hybrid between eugenol-rich O. gratissimum and thymol-rich O. viride that could produce eugenol (50-55%) and thymol (7-10%). Further, selfing and selection led to the development of two lines, one with an 80-85% eugenol chemotype and another with an 82-85% thymol chemotype (Khosla et al. 1990). Thus, interspecific hybridization and polyploidy generation offer additional ways for targeted breeding to modulate chemotypes and essential oil yield.
Metabolic engineering through transgenic approaches
Metabolic engineering requires an in-depth understanding of the specialized biosynthetic pathways. To improve the yield and composition of some of the most valuable terpenoids and phenylpropanoids in the essential oil of specific Ocimum species, pathway engineering can be explored as a targeted approach (Fig. 8). (Fig. 6 summarizes the vital functions of plant VOCs in plant-animal, plant-pathogen, and plant-plant interactions and in adaptation to abiotic stresses, including pollinator and seed-disperser attraction, herbivore deterrence, pathogen defense, priming of defense responses, allelopathy, and photoprotective, antioxidant, cold- and heat-tolerance actions.) The identification and functional characterization of genes involved in the biosynthesis of specialized metabolites as chemotypes is crucial to manipulating steps in their biosynthetic pathways through metabolic pathway engineering. Recently, key genes involved in specialized metabolite biosynthesis have been characterized from different Ocimum species. These include 4-COUMARATE-COA LIGASE (Ok4CL7 and Ok4CL15) (Lavhale et al. 2021) of the phenylpropanoid pathway, 3-HYDROXY-3-METHYLGLUTARYL-COA REDUCTASE (OkHMGR) (Bansal et al. 2018) of the MVA pathway, and β-CARYOPHYLLENE SYNTHASE (OkBCS), a sesquiterpenoid synthase, characterized from O. kilimandscharicum. Also, several genes from O. basilicum have been characterized, such as PHENYLALANINE AMMONIA-LYASE (ObPAL) (Khakdan et al. 2018) of the phenylpropanoid pathway, 4-HYDROXYPHENYLPYRUVATE REDUCTASE (ObHPPR) and TYROSINE AMINOTRANSFERASE (ObTAT) involved in rosmarinic acid biosynthesis (Li et al. 2019), and OXIDOSQUALENE CYCLASES (OSCs) and cytochrome P450s (CyP450s) in ursolic acid and oleanolic acid biosynthesis (Ghosh 2018). Anand et al. (2016) characterized EUGENOL SYNTHASE (EGS) involved in phenylpropanoid biosynthesis from several Ocimum species. Metabolic engineering techniques can facilitate particular manipulations in metabolite flux to achieve higher levels of the desired metabolites (Dudareva et al. 2013; Lange and Ahkami 2013; Marchev et al. 2020). The HMGR enzyme from the MVA pathway and the 1-deoxy-D-xylulose-5-phosphate synthase (DXS) and 1-deoxy-D-xylulose-5-phosphate reductoisomerase (DXR) enzymes from the MEP pathway determine the metabolite flux for isoprenoid biosynthesis (Rodríguez-Concepción 2006). In the study conducted by Xie et al. (2008), higher activity of enzymes from the MEP pathway correlated well with the high level of citral in O. basilicum (line SD). On the contrary, high activity of PAL has been observed in O. basilicum (line EMX-1), which is rich in methyl chavicol. Transcriptomic, proteomic, and biochemical approaches have revealed the reduced carbon flux into the phenylpropanoid pathway resulting in the terpenoid-rich chemical profile of O. basilicum (line SD) (Xie et al. 2008). In a similar context, higher levels of terminal enzymes from the terpenoid biosynthetic pathways, along with low levels of PAL, could be attributed to the increased flux into terpenoid biosynthesis (Iijima et al. 2004). Thus, directing carbon flux through the overexpression or silencing of critical enzymes at the entry, key intermediate, or terminal points (Fig. 8) of either the phenylpropanoid or terpenoid pathways will help to modify or improve the key chemotypes in Ocimum species.
Many transgenic plant species have been developed that produce increased levels of monoterpenoids through the overexpression of TPSs using constitutive promoters in heterologous systems (Aharoni et al. 2003, 2006). Recently, overexpression of HMGR from terpenoid-rich O. kilimandscharicum (OkHMGR) in different phenylpropanoid-rich Ocimum species (O. basilicum, O. gratissimum, and O. tenuiflorum) has led to terpenoid accumulation with increased essential oil content (Bansal et al. 2018). Further, many enzymes of the metabolic pathway occur as isoforms. As PAL isoforms are localized in different subcellular sites, such as microsomal and cytosolic, this results in the differential subcellular distribution of cinnamic acid and, in turn, can partition phenylpropanoid biosynthesis into different end-product-specific pathways, such as flavonoids, lignin, etc. (Achnine et al. 2004). Also, different 4CL isoforms can regulate the flux of various hydroxycinnamic acids into other branches of phenylpropanoid biosynthesis (flavonoids, anthocyanins, phenylpropenes, lignins, coumarins, etc.), which makes them a promising target for metabolic engineering in Ocimum species (Lavhale et al. 2018). For instance, the silencing of a specific 4CL isoform (OS4CL) through RNAi in O. tenuiflorum has led to a reduction in the eugenol level without affecting lignin and sinapic acid contents (Rastogi et al. 2013). Recently, characterization of two 4CL isoforms (Ok4CL7 and Ok4CL15) from O. kilimandscharicum revealed that Ok4CL7 utilizes p-coumaric acid, ferulic acid, and caffeic acid, whereas Ok4CL15 uses p-coumaric acid, ferulic acid, and sinapic acid as substrates, indicating their potential roles in lignin and phenylpropanoid biosynthesis (Lavhale et al. 2021). Overall, such reports have demonstrated that the desired change in the chemotypic profile can be achieved by targeting a specific isoform of an enzyme.
Furthermore, transcription factors (TFs) play a pivotal role in regulating specialized metabolic pathways. For example, overexpression of certain TFs, such as MsMYB from M. spicata, resulted in the reduced production of specialized metabolites, indicating their role as repressors (Reddy et al. 2017; Wang et al. 2016). Thus, metabolite flux analysis and interventions to suppress the expression of TFs that negatively impact the pathway could also enhance the yields.
The recently developed, Nobel Prize-winning RNA-guided genome editing technique, clustered regularly interspaced short palindromic repeats/CRISPR-associated 9 endonuclease (CRISPR/Cas9), is a potential tool for crop improvement owing to its high efficiency, simplicity, and specificity (Arora and Narula 2017).
Metabolic engineering by targeting multiple genes can be achieved through the multiplex CRISPR/Cas9 system to turn plants into bio-factories for specialized metabolite biosynthesis (Bhambhani et al. 2021; Karkute et al. 2017). Several genes that encode the enzymes involved in the biosynthesis of many specialized metabolites are present in clusters on the chromosomes, and the CRISPR/Cas9 tool has proven to be an efficient method for knock-in or knock-out of gene clusters (Bhambhani et al. 2021). In polyploid plants, multiple homologs of a gene of interest can be targeted through sgRNA-based CRISPR/Cas9-mediated genome editing (Wilson et al. 2019). (Fig. 8 outlines metabolic engineering approaches for manipulating a desired chemotype: overexpression of pathway enzymes via constitutive promoters or CRISPR activation (CRISPRa), downregulation via RNAi, CRISPR interference (CRISPRi), or CRISPR knock-out at entry, intermediate, branch, or terminal points of a pathway, and manipulation of TFs that act as positive or negative regulators.) Thus, a similar approach can be used in the Ocimum species for which polyploidy is reported to enhance desired metabolites. CRISPR/Cas9 has recently been applied in metabolic engineering to produce specific metabolites (Fig. 8) in medicinal plants; for example, knocking out the 4′-O-METHYLTRANSFERASE 2 (4′OMT2) gene from the benzylisoquinoline alkaloid pathway has resulted in reduced production of morphine, thebaine, etc. in Papaver somniferum (Alagoz et al. 2016). Also, the biosynthesis of diterpenoid tanshinones was blocked by targeting the diterpene synthase gene (SmCPS1) in the Chinese medicinal plant Salvia miltiorrhiza, which diverted geranylgeranyl pyrophosphate (GGPP) to taxol biosynthesis (Li et al. 2017). Recently, Navet and Tian (2020) demonstrated CRISPR/Cas9-mediated targeted mutagenesis in O. basilicum, and genomic and transcriptomic resources for Ocimum species (Anand et al. 2019; Singh et al. 2020) are available. Consequently, such resource availability can boost the mass production of crucial chemotypes from selected Ocimum species by targeting specific biosynthetic pathway genes through CRISPR/Cas9 technology or other genome editing approaches where sequence information is a prerequisite. Thus, CRISPR/Cas9 studies will have enormous potential for chemotype improvement in these Ocimum species. Additionally, integrative analysis using transcriptomic, proteomic, and metabolomic approaches will give a system-level framework for identifying crucial genes or pathways involved in the biosynthesis of specialized metabolites and their regulation; subsequently, this may speed up advancements in Ocimum species to improve the quality and yield of essential oil.
Recently, the integration of transcriptomics with metabolomics has helped to discover the tissue-specific biosynthesis and compartmentalization of major metabolites, such as camphor and eugenol, in O. kilimandscharicum (Singh et al. 2020), which needs to be explored further.
The anatomical structures in which the essential oil (representing only the volatiles recoverable from these structures by steam distillation) is biosynthesized and stored in Ocimum species can also be targeted to improve chemotype contents. The types of glandular trichome (peltate and capitate), their size, and their density can affect the net efficiency of essential oil accumulation, as the level of secretion is related to trichome size (Huchelmann et al. 2017) and density (Deschamps et al. 2006). The methyl chavicol accumulation pattern in O. basilicum leaf tissue correlated well with peltate gland density and CVOMT expression in the peltate glands at different developmental stages (Deschamps et al. 2006). Similarly, a study in Artemisia annua (2017) showed an increase in trichome density along with increased artemisinin content when a MYB TF (AaMYB1) was overexpressed. Also, exogenous treatment with phytohormones (gibberellic acid and calliterpenone) in M. arvensis induced the formation of a greater number of trichomes with increased diameter, which resulted in increased essential oil accumulation with high menthol and menthone contents (Bose et al. 2013). Recently, transcriptomic analysis of O. basilicum and O. tenuiflorum was carried out to identify genes involved in glandular trichome development in relation to essential oil biosynthesis. Most of the transcripts belonged to TF families, such as bHLH, C2H2, R2R3MYB, and R3MYB, which regulate trichome development. Their higher expression in O. basilicum than in O. tenuiflorum may be associated with the high essential oil content of O. basilicum (Chandra et al. 2020). Thus, all such reports reveal that higher accumulation of essential oil can be facilitated by large trichome size and high trichome density. Moreover, the development of the glandular trichome is driven by a TF interactome network, which can act as either an activator or an inhibitor (Lange and Turner 2013). Consequently, characterizing such an interactome to modulate the anatomy and density of glandular trichomes in Ocimum species, for the biosynthesis and storage of higher quantities of essential oil with important chemotypes, will be a great biotechnological challenge in the future.
In vitro tissue-culture techniques for Ocimum species
Many Ocimum species have been successfully regenerated using in vitro propagation (Dode et al. 2003; Manan et al. 2016; Rady and Nazif 2005; Saha et al. 2010; Singh and Sehgal 1999). In addition, the use of elicitors in callus, cell, and organ cultures for the overproduction of specialized metabolites is an effective strategy (Fig. 7) for chemotype improvement (Namdeo 2007). For example, callus culture has been more effective for the production of betulinic acid than in vitro derived leaves from O. basilicum, O. kilimandscharicum, and O. tenuiflorum (Pandey et al. 2015). Light quality also strongly influences phenylpropanoid biosynthesis (Nadeem et al. 2019; Nazir et al. 2020b), while exogenous melatonin is effective for phenolics production in callus cultures of O. basilicum (Duran et al. 2019; Nazir et al. 2020a). Furthermore, differentiated plantlet or organ culture is beneficial for metabolite production with higher and more stable essential oil yield (Karuppusamy 2009). In particular, shoot culture has proven the best option for higher accumulation of specialized metabolites compared with cultivated plants (Murthy et al. 2014). The methyl chavicol level was higher in the essential oil from in vitro propagated O. basilicum than from ex vitro and in vivo plants (Manan et al. 2016). Also, in vitro grown leaves and somatic embryos had higher quantities of eugenol than field-grown O. basilicum and O. tenuiflorum leaves (Bhuvaneshwari et al. 2016). However, cell culture can be superior for producing metabolites at higher yield by scaling up the culture (Nitzsche et al. 2004). Mathew and Sankar (2014) reported higher total terpenoid content in cell culture in the presence of an elicitor than in field-grown O. basilicum, O. gratissimum, and O. tenuiflorum plants. Similarly, leaf-derived suspension cultures accumulated 11-fold more rosmarinic acid than callus cultures or leaves from field-grown O. basilicum plants (Kintzios et al. 2003). With the treatment of elicitors and precursor feeding, the accumulation of total phenylpropanoids was elevated in suspension cell cultures, with correlated PAL expression, in O. tenuiflorum (Vyas and Mukhopadhyay 2018). Likewise, higher production of triterpenoids (such as betulinic acid, ursolic acid, and oleanolic acid) and rosmarinic acid has recently been achieved in O. basilicum suspension culture (Pandey et al. 2019). In O. basilicum, high levels of nepetoidins have accumulated in callus and suspension cultures (Berim and Gang 2020). Subsequently, 2.7-fold higher linalool and a 50% rise in methyl chavicol were observed with silver nitrate as an elicitor in cell suspension cultures of O. basilicum (Açıkgöz 2020).
Hairy roots induced by Agrobacterium rhizogenes-mediated transformation are efficient for specialized metabolite production (Murthy et al. 2008). They are genetically stable and can grow in media devoid of growth regulators. They have a high growth rate and can produce metabolites normally associated with the plant's aerial parts (Srivastava and Srivastava 2007). For instance, the enhanced levels of ursolic acid and eugenol in hairy root cultures of O. tenuiflorum corresponded well with the concentration and duration of elicitor exposure and the age of the cultures (Sharan et al. 2019). Biswas (2020) showed enhanced rosmarinic acid content using methyl jasmonate as an elicitor in non-transformed O. basilicum root culture. Further, under both light and dark conditions, rosmarinic acid accumulation is higher in hairy root cultures from the green basil cultivar of O. basilicum than from the purple basil cultivar (Kwon et al. 2021). Previously, elite hairy root lines had been developed with significantly higher rosmarinic acid levels than non-transformed roots of O. basilicum (Srivastava et al. 2016). In addition, somatic hybridization can be used to produce hybrids from related species or distant genera (Grosser et al. 2000). Somaclonal variations can help to enhance the essential oil profile of Ocimum species; if genetically stable over many generations, these variations can be incorporated through plant breeding techniques (Krishna et al. 2016). Biotransformation is another approach that can be used to accumulate metabolites of particular stereospecificity and regioselectivity, utilizing cell or organ cultures (Giri et al. 2001).
Conclusion and future perspectives
Several chemotypes from different Ocimum species have been reported, with a multitude of medicinal, culinary, and industrial applications. The approaches of classical breeding, interspecific hybridization, and tissue culture have so far been fruitful in increasing the total essential oil content as well as in developing specific chemotypes in Ocimum species. Still, globally there is a high demand for naturally occurring specialized metabolites. Although several such commercially important metabolites can be produced chemically, synthetic products are often left as racemic mixtures, while the natural compounds are free of such manufacturing defects and leftovers. It is important to understand the adverse effects (if any) of these metabolites in order to fine-tune their concentrations in final products or to define dosage. Hence, to meet such market needs, recent biotechnological interventions and synthetic biology tools have outstanding potential for the chemotypic improvement of Ocimum species and their economic expansion. An enhanced chemotypic profile in Ocimum species could also improve other traits, such as tolerance to abiotic stresses, disease resistance to phytopathogens, pest control, an allelopathic effect for weed control, and phytoremediation potential. Current genome editing tools will also help us to understand the biosynthetic pathways of specialized metabolites and provide an ideal option to improve essential oil yield and quality. However, the lack of whole genomic and transcriptomic sequences from important Ocimum species will be a challenge for exploiting hidden chemopotential and chemotype advancement using genome editing tools. Further, identification and characterization of the TF networks regulating specialized biosynthetic pathways, and correlating them with the metabolome, will be necessary for effective TF manipulation in chemotype improvement. Nevertheless, comprehensive metabolomic profiling of various organs and organelles will bring more exciting information on the fine-tuning of biosynthetic pathways for important specialized metabolites. Thus, extensive research aimed at the functional analysis of genes involved in the biosynthesis, regulation, and transport of specialized metabolites will be indispensable for enhancing the market value of several Ocimum species and their chemotypes. | 2021-08-04T05:26:24.880Z | 2021-08-02T00:00:00.000 | {
"year": 2021,
"sha1": "60223e5fcc9825f3abf488606642c96ac2973c28",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11101-021-09767-z.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "52cde25edf799e5aa07ee1146976b6f62b078260",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237738995 | pes2o/s2orc | v3-fos-license | QURAISH SHIHAB’S QURANIC EXEGESIS ON INTERRELIGIOUS HARMONY AND ITS RELEVANCE TO THE CONTEMPORARY WESTERN HERMENEUTICS
Abstract: This paper discusses Quraish Shihab's qur'anic exegesis and its relevance to the hermeneutics frameworks of Martin Heidegger and Jurgen Habermas in order to trace Islamic moderation in Indonesia. The issue of interreligious harmony is the main theme of discussion. The study is library research, with data drawn from books, journal articles, and audio-video files. This paper is divided into three parts. The first part presents Quraish Shihab's qur'anic exegesis on inter-religious harmony, which was delivered at Lentera Hati and written in some of his works. The second part discusses Heidegger's facticital hermeneutics and Habermas's critical hermeneutics. The third part tries to integrate Shihab's qur'anic exegesis with the hermeneutics concepts of Heidegger and Habermas. This effort at relevance is divided into two points of analysis. The first point juxtaposes Shihab and Heidegger in an existentialist philosophical analysis. The second point juxtaposes Shihab and Habermas in an intersubjective communication analysis. The paper strings together the network of meaning expressed by Quraish Shihab with contemporary Western hermeneutics. Therefore, the paper argues that the religious thought of the contemporary Indonesian exegete, M. Quraish Shihab, is relevant to the philosophical thought of contemporary philosophers such as Heidegger and Habermas.
Introduction
THE SITUATION of contemporary civilization, where human relationships around the world are closely intertwined, demands that identity be managed wisely, at the scale of both the self and the community. Leonard Swidler described the evolution of this relationship as a shift from the age of monologue to the age of dialogue. 1 One of the identities attached to most people around the world is religion. The management of religious identity in this contemporary era brings out a new motto: to be religious today is to be interreligious. 2 A believer can only describe his/her religious identity in relation to others. The identity image of a religious person relates to his/her interaction with other people of different faiths and religions. Relationship with the other is part of his/her religious identity.
M. Quraish Shihab (b.1944), a contemporary Indonesian qur'anic exegete, describes the inter-human relationship of religious identity in the pluralistic Indonesian context as inter-religious harmony. His views can be traced in several writings, most of them related to qur'anic studies. Some say that Quraish Shihab is a scholar with tremendous spirit and effort to integrate Islamic religious messages with the contemporary context. 3 The framework of contemporary civilization, however, has been shaped in such a way by many philosophers, both modern and postmodern. Among the influential philosophers are Martin Heidegger (1889-1976) and Jurgen Habermas (b.1929). The hermeneutics concepts of these two figures have contributed to coloring the image of contemporary civilization. Heidegger focused on the concept of the self, while Habermas focused on the concept of social relation. Heidegger is remembered for his facticital (faktizitat) hermeneutics, while Habermas is known for his critical hermeneutics. The discussion concerning the hermeneutical link between Quraish Shihab and these two philosophers is possible because Quraish Shihab has used several philosophers' thoughts in his various works; there are at least 25 names of philosophers spread across the volumes of his magnum opus of qur'anic exegesis, Tafsir al-Mishbah. This research seeks to string up the networks of meaning expressed by Quraish Shihab as he talked about inter-religious harmony in a television program, Lentera Hati, aired on Metro TV, with the hermeneutics concepts of Heidegger and Habermas. This paper is divided into three parts. The first part presents Quraish Shihab's talk on interreligious harmony, which was delivered at Lentera Hati on Metro TV. The event, which lasted about 30 minutes, was divided into two sessions: monologue and dialogue. To find a deeper qur'anic interpretation by Quraish Shihab on interreligious harmony, the data obtained from the event were supported by data from various works of Quraish Shihab on similar themes. The second part of the paper discusses Heidegger's facticital hermeneutics and Habermas's critical hermeneutics. The third part tries to integrate the argument of Quraish Shihab with the hermeneutics concepts of Heidegger and Habermas. This hermeneutical linkage analysis is divided into two points: the first juxtaposes Shihab and Heidegger in an existentialist philosophical analysis, while the second juxtaposes Shihab and Habermas in an intersubjective communication analysis. The result, which is also the argument of this paper, is that the religious thought of the contemporary Indonesian exegete, M. Quraish Shihab, is relevant to the philosophical thoughts of contemporary philosophers such as Heidegger and Habermas.
The paper is based on library research. The data in this research come from library materials, such as books, journal articles, and audio-visual files. Two criteria guided material selection: a) the principle of recency; b) the principle of relevance. 5 These two criteria were used in selecting the various reference sources, both those related to the figures discussed in this study and those related to the theme of interreligious harmony. The theoretical framework of this research is critical discourse analysis. Discourse is understood as a complex set of relations, including relations of communication between people who talk, write, and in other ways communicate with each other. It also describes relations between concrete communicative events, such as conversations and newspaper articles. However, there are also relations between discourse and other complex objects, such as the physical world, persons, power relations, and institutions, which are interconnected elements in social activity or praxis. 6 The discourse produced by Quraish Shihab when he talks about inter-religious harmony in the television program Lentera Hati does not stand alone, but relates to the context of human life in general, especially with regard to religious diversity. The context surrounding Quraish Shihab is the diversity of religions in Indonesia, where the social order must be arranged in such a way that human life can be in harmony. By analyzing the relevance of Quraish Shihab's qur'anic exegesis on interreligious harmony to contemporary Western hermeneutics, such as that of Heidegger and Habermas, it will appear that the diversity of views must be appreciated and well managed in social life.
Quraish Shihab on Inter-Religious Harmony
Quraish Shihab was born in South Sulawesi in 1944. In 1992, he became Rector of IAIN Syarif Hidayatullah, Jakarta, after previously serving as Vice Rector for Academic Affairs. In 1998, he was appointed by President Soeharto as Minister of Religion of the Republic of Indonesia in the VII Development Cabinet. Quraish Shihab is a scholar-thinker who has been very productive in producing written works. Academically, he is very consistent in his path: the study of the Qur'an and its exegesis, to which most of his works are devoted. The Lentera Hati program on inter-religious harmony was divided into two sessions: monologue and dialogue. In the monologue session, Quraish Shihab, as a popular Indonesian qur'anic exegete, explored many qur'anic verses as well as traditions of the Prophet Muhammad related to inter-religious life.
Shihab said that, according to Muslims' belief, God is Omnipotent. Does God have the ability to make all mankind adhere to one religion only? Yes, He does. If so, why do religious differences occur? Why are there Muslims, Christians, Hindus, Buddhists, various religions, and many beliefs? It must be His will. This is the main principle that must be implanted in the hearts and minds of everyone who believes in God: that God is Almighty. This principle is repeatedly written in the Qur'an, among others in Qs. Hūd 11:118. In his Tafsir al-Mishbah, Shihab describes this verse as follows: And if your Lord, O Muhammad, who has always been doing good and guiding you wills, of course He will make all human beings one, namely adhering to only one religion and submitting to Allah, just like the angels, but Allah does not want that, so that humankind does not become one people. Allah gave them the freedom to sort and choose so that they always disagree, even concerning the main religious issues which should not be disputed (Qs. Hūd 11:118). 16 What if God wanted to make all human beings adhere to one religion only? Shihab argued: "One way God could do this is by revoking the human ability to sort and choose. He would make humans like angels. However, God does not want it. Humans get freedom. They may select and learn their own religion. They may also know other religions. They may choose to be Muslims or adherents of another religion. Based on this consideration, all people should respect any choice taken by an individual, because God gives humans freedom." Shihab interprets this verse as follows: So, whoever among you, or other than you, wants to believe in what I (Muhammad) am conveying, let him believe, the benefits will return to himself, and whoever among you and other than you wants to disbelieve and reject Allah's messages, then let him be an infidel, no matter how rich and high his social position may be. 17 Humans are responsible for their own choice. It is one of the principles that every religious person must realize. Shihab asserted that humans should not be more enthusiastic than God. God has given humans freedom. Humans should also give similar freedom to others.
Interpreting this verse, Shihab said that for each people, namely groups who share the same time, or race, or other similarities among you, O mankind, We give rules which are the source of eternal happiness and the bright path to that source. 18 How does God give a law and an open way? Shihab gives a description as follows: God, Allah, has sent messengers everywhere to give warnings and good news. The Qur'an states: And there never was a people, without a warner having lived among them (in the past) (Qs. Fāṭir 35:24). Based on this verse, Shihab asserted that there were warners in Indonesia as well. The prophets are numerous, while the last prophet, according to Muslims' belief, is the Prophet Muhammad. These prophets came with their teachings. After they left, their teachings were developed by their followers as new problems arose. Sometimes the development was in accordance with the principles of the teachings; at other times, however, the development was inappropriate. God gives each community the system set by God through the development activities of the followers. Some say: It is my law; it is my method. Muslims have a particular law and method. Christians have a particular law and method. Hindus follow the same pattern. Each one may follow its respective method. The Qur'an addresses this in Qs. al-Mā'idah 5:48, and Shihab explained this verse in his work Tafsir Al-Mishbah as follows: If Allah willed, surely He will make you, O people of Moses and Jesus, Muhammad's people and other people before that, one people only, that is by instinctively uniting your opinion and not giving you the ability to choose, but He, Allah, does not want that. Because He wants to test you, that is, treat you to the treatment of people who want to test what He has given you, both concerning the law and other potentials, in line with the differences in potential and His grace to each (Qs. al-Maidah 5:48). 19 God sent the prophets, such as Noah, Moses, Isa, and Muhammad. The prophets gave religious teachings to humankind. Humans should follow and develop the prophets' commands. God indeed tests humans as to whether they follow or deny what they have developed.
Based on this verse (Qs. al-Mā'idah 5:48), Shihab then talked about the principle of God's teaching that all religious people are in a race in virtues: Muslims are in that kind of race, and so are Christians, Buddhists, Hindus, and Jews. If they can do virtuous deeds together, they should do so.
Exploring the message of a hadith, Shihab said: "One of the greatest sins is a person insulting his father. It did not make sense to the companions of the Prophet. How does someone insult his own father? the companions asked. The Prophet replied: Someone cursed the father of another person, and as a result that person replied by cursing his father. Therefore, religion prohibits cursing other people's parents, reviling other people's prophet, and insulting other people's teachings, because it can invite insults in reply." According to Shihab, this verse is addressed only to the community of Muslims: And do not, O Muslims, curse the idols that they worship besides Allah, because if you curse them then, as a result, they will also scold Allah by transgressing or hastily, without thinking and knowledge (Qs. al-An'ām 6:108). 20 Muslims should never revile, curse, or insult other people's God. Muslims should not blame religious teachings in front of their adherents. Muslims should not bother them. If they do, the others will also insult Allah, the Muslims' God; they will curse the Muslims' teaching. The Qur'an states: Did not God check one set of people by means of another, there would surely have been pulled down monasteries, churches, synagogues, and mosques, in which the name of God is commemorated in abundant measure (Qs. al-Ḥajj 22:40). Based on this verse, Shihab said that the enemies of religion will destroy places of worship. Therefore, religious people should work together. They are obliged to protect, for example, a church, because if someone is able to destroy it, someone else may destroy a mosque. It is a religious teaching written in the Qur'an. If someone wants to be respected, he or she should respect others as well. These are the principles of harmony taught in Islam.
Furthermore, Shihab relates the concepts of 'adl and iḥsān to the context of inter-religious relations. 'Adl, or justice, is giving people their rights as they are. Iḥsān moves beyond rights: in iḥsān, someone takes only part of his or her own rights and gives others more than what is due to them.
Say: "You will not be questioned, that is, you will be held accountable for the sins we have committed if you consider our Islam is a sin and we will not be asked about what you are doing and will do." (Qs. Saba' 34:25). 21 Say: "Our Lord, that is Allah, will gather us all together, then He will make decisions between us fairly and correctly. And He is the All-decision-Giver, All-Knowing." (Qs. Saba' 34:26). 22 Shihab asserted that anyone should not be more enthusiastic than God. God has given humans freedom. People should find a common ground among them, that is by no disputing the truth and fault in the context of social life. Doing this concept, they will live in harmony and peace. That is, Shihab said, the principles of harmony in the teachings of Islam.
What Shihab explained about interreligious harmony at Lentera Hati is in line with what he has expressed in several of his works on similar topics. In his work "Membumikan" Al-Quran, Shihab states that by exploring religious teachings, leaving behind blind fanaticism, and grounding oneself in reality, a path of coexistence and harmony can be formulated. Aren't monotheistic religions, with their teachings of the One Godhead, essentially embracing universalism? It is God Almighty who created all human beings. All humans come from one lineage, regardless of religion, nationality, or race. 23 In his work Wawasan Al-Qur'an, Shihab linked harmony and democracy. He stated that Islam comes not only to maintain its existence as a religion, but also to recognize the existence of other religions and to give them the right to live side by side while respecting their adherents. God has given humans the freedom to choose for themselves the way they think is good, and to express their opinions clearly and responsibly. It can be concluded that freedom of opinion, including freedom of choice of religion, is a right bestowed by God on every human being. What the Qur'an says contains the seeds of democracy. 24
Shihab's explanation about interreligious harmony above may and should be placed in the Indonesian context. It is very clear that the correlation between text and context is very close. Indonesia is a country inhabited by followers of various religions. World religions, such as Islam, Catholicism, Protestantism, Hinduism, Buddhism, and Confucianism, live and develop in Indonesia. Furthermore, there are also many local belief systems in Indonesia that co-exist with adherents of the world religions; these local beliefs even existed in Indonesia before the world's major religions came to the archipelago. 25 In facing this diversity, the founding fathers of the Indonesian nation proclaimed the motto of unity in diversity. The motto of Bhinneka Tunggal Ika is truly a great example from Indonesia of how people from different communities, ethnic groups, cultures, and religions are able to unite, communicate, and act together to create a better life. In my view, Bhinneka Tunggal Ika is the bond of the Indonesian people. Although we come from different perspectives, we can live under the same umbrella. Indonesian Muslims, Hindus, Buddhists, Christians, and many other spiritual sects are tied together. 26
The Facticital Hermeneutics of Heidegger and The Critical Hermeneutics of Habermas
Martin Heidegger (1889-1976) raised two major themes in his hermeneutics: being and language. This paper will only discuss the first theme. The concept of being was his fundamental question and concern throughout his magnum opus. 27 Hardiman said that the core thought of this work is about the mystique of the everyday. 28 Heidegger also discussed language deeply. 29 The theme of being is triggered by the phenomenological approach. Heidegger used many terms, such as Dasein or Being-there, as well as Being-in-the-world, 30 referring to human existence, as his rejection of the abstraction in the traditional dualistic view of subject and object in understanding. For Heidegger, in the state of being 'just thrown', the human, as Dasein, is closely intertwined with subject and object. 31 That experience of 'just being' is what Heidegger refers to as facticity (faktizitat). 32
The hermeneutics of Heidegger is called facticital hermeneutics because, for Heidegger, understanding (verstehen) is not a cognitive act, 33 but a primordial act of Dasein which is pre-cognitive. Facticital hermeneutics is in charge of interpreting such primordial acts by allowing understanding, as facticity, to manifest itself. This is where the influence of phenomenology is seen in Heidegger's thought. 34 Understanding is a whole disposition in one's way of life, which is then called the pre-structure of understanding or presupposition. It is formed from the totality of Dasein's involvement in the practices of a life. It is non-thematic, pre-predicative, and non-verbal. Dasein is really involved in these practices, and from this involvement understanding grows. Thus, humans are hermeneutical beings. An interpretation is directed by this unconscious pre-cognitive disposition. Within each interpretation, there is a pre-structure of understanding which directs the interpretation. This explanation shows that understanding implicitly precedes interpretation; interpretation is the explication of an implicit understanding.
Due to the existence of the pre-structure of understanding, interpretation thus involves three steps: 1. lifting to consciousness; 2. clarification of meaning; 3. displaying the invisible. In the first step, interpretation means connecting with the consciousness that is formed from the practice of everyday life. In the second step, interpretation means clarifying the meaning that comes from this consciousness. In the third step, interpretation means revealing something that is hidden. From this process, disclosure is expected to emerge. The term aletheia refers to this kind of disclosure. As facticity, interpreting is always directed to the future because Dasein is temporal, anticipating its own possibilities. Interpreting is projective. The pre-structure of understanding is thus oriented to the future. Interpreting is the revelation of meaning for the future and its new possibilities. In this sense, truth is not a correspondence between the meaning of the text and reality, nor is it a coherence within the text itself, but the unfolding of meaning that occurs in the existential encounter between the reader and the text. 35
Jurgen Habermas (b.1929) focuses on the problem of intersubjective communication in the public sphere. 36 Habermas's thought criticizes and fills the gaps of previous theories. Understanding may be controlled by processes of power. Habermas also criticized the claim of universality of ordinary hermeneutics, led by Heidegger and his disciple Gadamer, by pointing out the boundaries of ordinary hermeneutics in two respects: the monologic language of the natural sciences and systematically distorted communication. 37
In distorted communication, Habermas refers to two cases: first, the psychopathological case; second, the case of collective behavior as a result of indoctrination. This view of Habermas is then referred to as 'critical hermeneutics', and he became the most powerful and influential figure of the second generation of the Frankfurt tradition. 38 The tools of analysis used by Habermas in constructing his critical hermeneutics are Freud's psychoanalysis and Marx's critique of ideology (domination and repression). 39 The second case, collective behavior as a result of indoctrination, is more complicated than the first. In this second case, actors and speakers do understand their language and behavior, but their utterances and behavior are not produced by common sense; they are produced by the effects of ideological indoctrination. This is called falsches Bewusstsein, or false consciousness, and the result is so-called 'systematically distorted communication'. It means that the communication of the actors has produced a system of misunderstanding that makes them unaware of their mutual misunderstanding, causing systematic distortion in their communication. They do not realize that their speech and behavior have been co-opted by a greater power, namely ideological indoctrination.
Furthermore, critical hermeneutics moves to communicative action. 40 Communication opens the way for mutual understanding among actors; this is the central idea of Habermas's theory of communicative action. Consensus, or collective agreement, is the result of this kind of communication. The way to reach consensus is that all actors must be willing to engage in dialogue. An actor may propose ideas with arguments and evidence (Habermas terms these validity claims, or claims of truth). 41 In doing so, he has to accept being criticized and also accept the truth coming from others. Thus, the subjective truth claims of each actor will meet on common ground. The debate of rational argumentation will culminate in the most reasonable interpretation. This most reasonable interpretation should be accepted by all actors, because it tends toward inter-subjective truth, namely agreement or consensus.
To reach a consensus on a claim of truth, there are four requirements that should be fulfilled. The claim should be: (1) understandable, (2) true, (3) right, and (4) sincere. In the first aspect, the truth in the consensus must be understood by all people involved in it. The second aspect is that the truth in the consensus must be objective and factually universal, not subjective. The third aspect is that the truth in the consensus must be in accordance with the norms and values that apply in the local area; this is to ensure that the consensus reflects local wisdom. The fourth aspect is that the truth in the consensus is not only derived from the shared experience of all people involved in it, but is also related to their honesty.
The Relevance: Stringing Network of Meaning
In this section, Quraish Shihab's exegesis on inter-religious harmony and the hermeneutics frameworks of Heidegger and Habermas are intertwined. The relevance between Quraish Shihab's and Heidegger's perspectives results in an analysis of existentialist philosophy, while the relevance between Quraish Shihab's and Habermas's perspectives results in an analysis of intersubjective communication.
The Analysis of Existentialist Philosophy
The relevance of Quraish Shihab's and Heidegger's perspectives lies in their respective descriptions of human individuality. Quraish Shihab describes the freedom that each individual has to live according to what he/she wants, including the choice of which religion to embrace. On the other hand, Heidegger also describes human individuality, where each person interprets his existence as a human through a pre-structure of understanding that differs for each individual.
Quraish Shihab asserted that if God wanted, He could have made all human beings adhere to one religion only. One way He could do this is by revoking the human ability to sort and choose; He would make humans like angels. But God does not want it, as stated in the Qur'an chapter Hūd 11:118 and chapter al-Mā'idah 5:48. Everyone is thus given the freedom to understand and interpret life, which then extends to the freedom of choosing a religious identity.
Furthermore, based on the Qur'an chapter Maryām 19:95, Quraish Shihab affirms the individuality of responsibility. Every human who has lived since the prophet Adam until the last day, who has reached adulthood and has known the teachings conveyed by the Prophet, will be held responsible. Every human bears responsibility for his/her choice. Humans come to God individually.
Concerning the nature of individuality, Heidegger said that understanding religion is different from knowing theology. Theology is a technique; it is cognitive, not existential. Understanding religion is an existential way of living based on a particular religion. Each individual lives based on the lifestyle of his/her religion. It is more primordial than the articulated teachings of faith, such as theology.
Based on Heidegger's thought, choosing a religious identity is thus not merely a cognitive matter. As Dasein, 'thrown' into the world and 'being-in-the-world', the human is closely intertwined with subject and object; the abstraction of duality collapses. The totality of Dasein's involvement in the practices of life shapes the pre-structure of understanding or presupposition, which in turn drives people to interpret the text. One of these 'texts' is the selection of religious identity.
Cognitively, if a person knows the categories A and B, he can identify and choose between the two. But it is not so with the selection of religious identity. The issue of religion is not solely related to cognitive matters. It is not surprising that someone may be an expert in the field of a particular religion and yet not be a follower of that religion. There are many Christologists who are not followers of Christianity, and many Islamologists who are not Muslims. Understanding religion, again, is not the same as knowing theology, as Heidegger asserted. On this issue, Quraish Shihab also asserts that every individual is given the freedom to understand and interpret life, including the freedom to sort and choose a religious identity.
Many people are not in the position of choosing their religious identity; they receive it as an inherited identity. This background influences people's way of thinking. The practices of religious life form a certain presupposition that tends in a certain direction. This is why someone interprets life in an Islamic, Christian, Jewish, Buddhist, Hindu, Confucian, or other way, depending on the pre-structure of understanding of each one.
On a broader scale, each person should be given the same respect despite different articulations. Every human should be viewed as an autonomous individual. Each articulation should be judged based on the presupposition formed from the totality of a person's involvement in the practices of the life he/she has lived. Any form of practice is legitimate, because the human is Dasein, thrown into the world without previously asking for or expecting it.
On this issue, Heidegger's hermeneutics teaches that every human being should be viewed as a concrete entity who brings his/her own presupposition. This differs from positivistic logic, which tends toward a dualistic framework of subject and object and therefore responds to the other as an abstract entity. With regard to the interpretation of texts, both written and unwritten, Heidegger's hermeneutics is more appreciative of interpreters and the results of their interpretation, whatever their level of shallowness or depth. This right is grounded in the thought that Dasein is temporal, and that understanding therefore also evolves.
In line with the temporality of Dasein, interpretation therefore never stops. Truth in interpretation is not identified with constancy; it continues to expand following the temporality of Dasein. For the human being, truth is taken as an understanding that keeps widening without limit. Reality is an endless possibility that constantly reveals itself. One should not worry about the objectivity of one's interpretation, because this hermeneutics does not pursue objectivity but depth. When someone interprets, the result of that interpretation in a particular period is not taken as an absolute truth. The same text might later be interpreted more broadly than before, in accordance with the temporality of one's existence.
In line with the above explanation, Shihab emphasized the importance of the word fardan (alone) in the Qur'an, chapter Maryām, verse 95. God asserts that everyone bears responsibility for what he/she has done. God, however, will judge humans justly in accordance with their respective presuppositions; He pays attention not only to the articulation but also to how the articulation is constructed. This rests on the presupposition formed from the totality of one's involvement in the practices of life, which makes understanding grow as an anticipation of every possibility.
The Analysis of Inter-subjective Communication
The relevance of Quraish Shihab's and Habermas's perspectives lies in their respective descriptions of communication among different people in the effort to live in harmony. Quraish Shihab talks about how people of different religions may and should live together, while Habermas talks about communicative action in the public sphere among different actors to reach a consensus or agreement to live in harmony.
In his speech, Quraish Shihab said that Muslims should give others the freedom to choose their religious identity. Do not be more enthusiastic than God, he asserted. Each religious person should be in a race of virtues, as stated in the Qur'an, chapter al-Mā'idah 5:48. Quraish Shihab also recalled the prohibition of hate speech and hateful attitudes toward other religions, as in the Qur'an, chapter al-An'ām 6:108 and chapter al-Ḥajj 22:40. Cooperation in social life should be put first. Shihab said: maybe you are right, maybe I am right; maybe you are wrong, maybe I am wrong. The best thing to do is to look for common ground among religious communities. The question of truth and error should not be raised in the context of social life and inter-religious relations. In this case, Shihab quotes the Qur'an, chapter Saba' 34:24-26.
The hermeneutics of Habermas is important for inter-religious communication. Habermas shows that this kind of communication cannot operate in two spheres: the monologic language of the natural sciences and distorted, abnormal texts. An abnormal text is not understood even by its own speaker. There are two cases of abnormal text: psychopathology and collective behavior resulting from indoctrination. The second case deserves more attention, because it is related to religious doctrine. The important contribution of Habermas's hermeneutics is the recognition that a particular interest may lie behind any religious text without being realized by its author or speaker. The actors or speakers seem to understand their own language and behavior, but in fact these derive not from common sense but from the effect of ideological indoctrination. This is what is then called false consciousness.
Concrete examples of abnormal texts in the field of religious indoctrination are the texts of terrorists, suicide bombers, or radical fundamentalists. Habermas's hermeneutics does not take such texts as true, because they are abnormal texts built on ideological indoctrination rather than common sense. Habermas's hermeneutics always suspects that every text carries interests; there is a hidden power in it that the speaker does not realize. What can be learned from Habermas is not to believe automatically in the speech or behavior of every individual, since that speech or behavior may be an abnormal 'text'. This caution raises the awareness needed to reach the truth.
When the actors meet each other in the public sphere, the importance of another element of Habermas's hermeneutics becomes visible: managing the best way of conducting inter-subjective communication. If this is not managed properly, the public sphere may become chaotic, because each subject may claim his/her own truth and accuse the truth of others of being formed by the indoctrination of certain presuppositions. The contribution of Habermas's hermeneutics lies in the idea of communicative action, which paves the way for actors to reach a consensus or agreement. The subjective truth claims of each actor will meet on common ground. The debate of rational argumentation will culminate in the most reasonable interpretation accepted by all actors. This is called inter-subjective truth, that is, togetherness in agreement, consensus or understanding. Such a consensus ensures and guarantees the harmony of social life. On the one hand, each subject will be aware of his/her uniqueness and hence be recognized as valid. On the other hand, when communicating with others, he/she will not impose his/her own truth, but communicate with them to reach a certain consensus. Has the practice of consensus already been carried out in daily life? Yes. However, the main contribution of this hermeneutics is that communication should take place among subjects (inter-subjectively). The communication that has so far occurred in the public sphere may still assume a relation between subject and object, and the result of that kind of communication is victimization: someone may accept a consensus because he is forced to, acting as an object in the process of achieving consensus.
If Habermas's thought is drawn into the context of inter-religious harmony, each religious person should see a partner from another religion as a subject who has sovereignty of thought. Inter-religious cooperation in society requires an appreciation of egalitarianism, regarding each other as autonomous, concrete entities. It may be that the tensions among religious communities over the past years are due to positivistic patterns of communication which treat the partner as an abstract object. An example of this abstraction is when someone assesses another person on the basis of assumptions tied only to his/her religious identity, while those assumptions are themselves the result of an accumulation of distorted information. In such a situation, tension and suspicion inevitably continue.
When the partner is positioned as a subject, religious identity is only surface information. The whole self of the partner can be unveiled through intense and continuous communication. In the process of communication, the concrete and unique self reveals itself; the self formed from the complexity of life is presented, and the choice of any religious identity is appreciated. From this atmosphere of communication, inter-religious harmony emerges, and elegant, sympathetic cooperation arises among people of different religious identities.
The most difficult thing to do is certainly to reach consensus in social life. Indoctrination, as Habermas notes, may happen to anyone. The cases of terrorism and suicide bombing are merely examples; indoctrination may in fact be embedded in other forms as well. Other kinds of indoctrination therefore cannot be ignored, such as political indoctrination, mass-media indoctrination, social-media indoctrination, and even indoctrination by educational institutions. With so many forms of indoctrination, the demand remains for inter-subjective communication in order to achieve better consensus in public life, particularly in establishing inter-religious harmony.
In line with the above explanation, Shihab emphasized the importance of finding common ground among human beings, that is, by not disputing truth and error in the context of social life. Living in harmony and peace is one of the principal teachings of Islam. Furthermore, Shihab asserted that no one should be more enthusiastic than God. Habermas's concept of indoctrination points in the same direction as what Shihab said.
Conclusion
The paper has shown the relevance, in the form of an interwoven network of meaning, between what Quraish Shihab expressed as he talked about inter-religious harmony in a television program, Lentera Hati, and wrote in some of his works, and the hermeneutical concepts of Heidegger and Habermas. The above existentialist philosophical analysis and the analysis of inter-subjective communication thus confirm that the religious thought of the contemporary Indonesian exegete M. Quraish Shihab is relevant to the philosophical thought of contemporary philosophers such as Heidegger and Habermas. It is undeniable, however, that recent issues show the complexity of the problems related to inter-religious harmony, especially when it comes to political issues. The elegant consensus, therefore, should be pursued continuously. | 2021-09-01T15:02:38.461Z | 2021-06-30T00:00:00.000 | {
"year": 2021,
"sha1": "1c0e56a16baa8affc0a961752cdcf1b23eb529f6",
"oa_license": "CCBYSA",
"oa_url": "https://ulumuna.or.id/index.php/ujis/article/download/441/329",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ecb36b11eeb62bd9881444e08e8787deff886ee3",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
15708838 | pes2o/s2orc | v3-fos-license | Controlled synthesis of the DSF cell–cell signal is required for biofilm formation and virulence in Xanthomonas campestris
Virulence of the black rot pathogen Xanthomonas campestris pv. campestris (Xcc) is regulated by cell–cell signalling involving the diffusible signal factor DSF. Synthesis and perception of DSF require products of genes within the rpf cluster (for regulation of pathogenicity factors). RpfF directs DSF synthesis whereas RpfC and RpfG are involved in DSF perception. Here we have examined the role of the rpf/DSF system in biofilm formation in minimal medium using confocal laser-scanning microscopy of GFP-labelled bacteria. Wild-type Xcc formed microcolonies that developed into a structured biofilm. In contrast, an rpfF mutant (DSF-minus) and an rpfC mutant (DSF overproducer) formed only unstructured arrangements of bacteria. A gumB mutant, defective in xanthan biosynthesis, was also unable to develop the typical wild-type biofilm. Mixed cultures of gumB and rpfF mutants formed a typical biofilm in vitro. In contrast, in mixed cultures the rpfC mutant prevented the formation of the structured biofilm by the wild-type and did not restore wild-type biofilm phenotypes to gumB or rpfF mutants. These effects on structured biofilm formation were correlated with growth and disease development by Xcc strains in Nicotiana benthamiana leaves. These findings suggest that DSF signalling is finely balanced during both biofilm formation and virulence.
Introduction
Xanthomonas campestris pv. campestris (Xcc) is the causal agent of black rot disease which affects cruciferous crops worldwide (Onsando, 1992). As with many phytopathogenic bacteria, Xcc produces a range of factors that contribute to the ability of the bacterium to parasitize the host. Among these are extracellular enzymes capable of degrading plant cell components and an extracellular polysaccharide (EPS) called xanthan. These factors may play a number of roles during disease. Extracellular enzymes may be required to overcome plant defence responses, to allow bacteria to move into uncolonized plant tissues and to mobilize plant polymers for nutritional purposes. Xanthan induces susceptibility to Xcc in Nicotiana benthamiana and Arabidopsis thaliana by suppressing basal defences such as callose deposition (Yun et al., 2006), has a role in biofilm formation (Dow et al., 2003) and may have further roles in protecting bacteria from stresses of desiccation and host-elaborated defences.
In Xcc the production of extracellular enzymes and EPS is subject to coordinate positive regulation by a cluster of genes, the rpf cluster (for regulation of pathogenicity factors) (Tang et al., 1991;Dow and Daniels, 1994;Barber et al., 1997;Slater et al., 2000). Mutations in rpf genes lead to a reduced virulence in host plants. Several of the rpf genes mediate regulation via a small diffusible molecule named DSF (for diffusible signal factor) (Barber et al., 1997). DSF has recently been structurally characterized as cis-11-methyl-2-dodecenoic acid (Wang et al., 2004). The synthesis of DSF is directed by RpfB and RpfF and DSF perception and signal transduction is mediated by the RpfC/RpfG two-component system, which is encoded by the rpfGHC operon (Slater et al., 2000). Mutation of rpfC, which encodes the sensor component, leads to overproduction of DSF and to lower levels of EPS and extracellular enzymes (Tang et al., 1991;Slater et al., 2000). The addition of DSF can phenotypically restore rpfF but not rpfC mutants to wild type for production of extracellular enzymes and EPS (Barber et al., 1997). Regulation of EPS biosynthesis by DSF occurs at least in part at the level of transcription of the gum operon, which encodes the sugar transferases required for EPS biosynthesis (Vojnov et al., 1998;. There is circumstantial evidence for the operation of the DSF regulatory system in planta (Vojnov et al., 2001).
More recent work has implicated the DSF signalling system in the regulation of biofilm dispersal in Xcc (Dow et al., 2003). When grown in a rich medium containing glucose, rpfF, rpfG, rpfC and rpfGHC mutants form aggregates in which the bacteria are held together in a polymeric matrix. The integrity of this matrix is dependent on the synthesis of xanthan (Dow et al., 2003). Aggregates formed by rpfF mutants disperse upon addition of DSF, but those formed by other rpf mutants do not. As the wild type grows in a dispersed planktonic form under these conditions, it was concluded that the role of DSF was in induction of biofilm dispersal in Xcc.
Growth conditions are known to influence many aspects of bacterial behaviour including the formation of biofilms. The work in this article was prompted by the observation that when grown in minimal medium (conditions that may more closely mimic those found in planta), the wild-type Xcc formed a structured biofilm on glass slides. Here we have examined the role of DSF signalling in biofilm formation in minimal medium with confocal laser-scanning microscopy (CLSM) of GFP-tagged Xcc strains, used both singly and in pairwise combinations. We provide evidence that biofilm formation requires tight control of the level of DSF, with both DSF overproduction and non-production adversely affecting the formation of a structured biofilm. Furthermore, examination of the effects of inoculation of combinations of strains in the model plant N. benthamiana provided evidence for the action of DSF within plants and suggested that balanced DSF levels were required for optimal virulence.
Results
The development of a structured Xcc biofilm is under the control of the rpf/DSF system
The ability of Xcc to form biofilms was examined in the minimal Y medium (see Experimental procedures). In preliminary experiments, bacterial adhesion to polystyrene 96-well plates was analysed by crystal violet staining. Interestingly, the rpfF, rpfC and xanthan-deficient gumB mutants showed significantly less adherence than the wild-type strain 8004 after 6 h of growth in Y-minimal medium (Fig. 1). This is in contrast to effects seen in rich L medium, where levels of attachment of rpfG, rpfGHC and gum mutants were higher than the wild type (Crossman and Dow, 2004).
The characteristics of the Xcc biofilm formed in vitro in minimal medium were analysed by CLSM over a 4-day time-course experiment of static cultures in chambered cover slides (Russo et al., 2006). In the formation of a typical Xcc biofilm, the bacteria contacted the glass surface via the lateral cell surface and also predominantly attached to each other through lateral interactions forming microcolonies (Fig. 2, day 2). This phase was followed by the formation of compact aggregates of bacteria with a characteristic three-dimensional structure separated by extensive water spaces (Fig. 2, day 4). A z-projection of the x-y stacks (optical sections) showed mushroom-type biofilm structures (Fig. 2). Bacteria in these structures were mostly interacting laterally (Fig. 2, day 4).
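The z-projection mentioned here is essentially a maximum-intensity projection of the confocal x-y optical sections. The snippet below is only an illustrative sketch of that operation on a generic image stack (using NumPy and a synthetic array); it is not part of the Zeiss LSM workflow actually used in the study.

```python
import numpy as np

def max_intensity_projection(stack: np.ndarray) -> np.ndarray:
    """Collapse a (z, y, x) stack of optical sections onto the x-y plane
    by keeping, for each pixel, the brightest value along z."""
    if stack.ndim != 3:
        raise ValueError("expected a 3-D stack ordered as (z, y, x)")
    return stack.max(axis=0)

# Synthetic 20-slice, 256 x 256 stack standing in for a confocal series.
rng = np.random.default_rng(0)
demo_stack = rng.random((20, 256, 256))
projection = max_intensity_projection(demo_stack)
print(projection.shape)  # (256, 256)
```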
With the rpfF mutant (DSF-minus), microcolonies were seen after 2 days, but these did not develop into a structured biofilm, so that after 4 days only unstructured layers of bacteria were observed. With the rpfC mutant (DSF overproducer), although the bacteria showed some aggregation at day 2, only unstructured layers of bacteria were observed at day 4 ( Fig. 2). The relative levels of DSF of the different strains, which were predicted from behaviour in other media, were confirmed experimentally for growth in minimal medium in static culture (see Fig. S1).
Overall these results showed that DSF-mediated signalling is required for the formation of a structured biofilm in minimal medium.
Role of xanthan in the formation of the structured biofilm in minimal medium
To evaluate the importance of xanthan in the formation of the structured biofilm, the behaviour of the gumB mutant grown in Y medium was analysed by CLSM. We observed that the gumB mutant (strain 8397) was severely affected in microcolony formation and did not form more complex structures (Fig. 2). After 4 days, no evident biofilm architecture was observed on the base of the chamber (Fig. 2, z-stage). The gum cluster of genes cloned in pIZD261-15 restored normal levels of EPS and a typical structured biofilm to gumB strain 8397 (not shown). These observations confirmed that xanthan synthesis in Xcc is crucial for the development of the structured biofilm in Y medium. Measurement of the relative levels of xanthan in the static cultures showed that after 2 days of growth, both rpfF and rpfC mutants had significantly lower levels than the wild type (see Fig. S2), consistent with previous observations of the role of the rpf/DSF system in regulating synthesis of this polysaccharide (Tang et al., 1991;Slater et al., 2000;Vojnov et al., 2001).
Extracellular complementation of biofilm-defective phenotypes in mixed bacterial cultures
Our CLSM analysis indicated that mutations in rpfF and gumB genes resulted in the absence of a typical and structured biofilm. To confirm that this phenotype is due to the contribution of extracellular DSF and secreted xanthan, mixed cultures of gumB and rpfF strains, both GFP-labelled, were analysed over a time-course experiment by CLSM. The mixed culture of the gumB and the rpfF mutants was able to induce cell clustering and to develop a structured biofilm (Fig. 3D). These results suggested that reciprocal complementation had taken place, in which the lack of DSF in the rpfF mutant had been restored by DSF produced by the gumB mutant, and the xanthan produced by the rpfF mutant had compensated for the xanthan deficiency of the gumB mutant.
To investigate whether regulated production of DSF was needed for biofilm development, the wild-type strain 8004 was co-inoculated with the rpfC mutant (Fig. 3). At a 1:1 ratio the ability of 8004 to form the wild-type structure was abolished (Fig. 3B). At a 1:4 ratio of rpfC mutant to wild-type strain, only slight modifications of the biofilm were observed (not shown). Similar results were obtained in mixed cultures when both rpfC and wild-type strains were GFP-labelled or when only the wild-type strain was GFP-labelled (not shown). This eliminated the possibility that the rpfC mutant simply out-competes the wild-type strain for attachment, thereby always giving an rpfC-like pattern. Co-inoculation of the rpfC mutant with the rpfF mutant strain 8523 (DSF-defective) in a 1:1 ratio did not restore the formation of the typical wild-type structure (Fig. 3C). In contrast to the effects caused by the rpfC mutant, the rpfF mutant did not alter the wild-type 8004 biofilm when the two strains were mixed (Fig. 3A).
Taken together with the results of reciprocal complementation of rpfF and gumB mutants, these findings suggest that the amount of DSF produced has to be tightly controlled for the development of the biofilm and that increased levels of DSF interfere with this process. These conclusions were supported by the results of experiments in which exogenous DSF was added to cultures. Addition of DSF extracted from a 4-day culture of the wild type to a culture of the rpfF/GFP mutant allowed the production of a structured biofilm at 4 days (Fig. 3F). Extracts from cultures of the rpfF mutant (DSF-minus) by contrast had no effect on biofilm structure (Fig. 3G). Furthermore, addition of DSF extracted from the rpfC mutant (DSF overproducer) to cultures of the wild-type strain 8004 inhibited structured biofilm development (not shown).
Fig. 3 legend. A-E. Biofilms formed after 4 days by mixed cultures are shown for the following: wild type-gfp + rpfFgfp (A), wild type-gfp + rpfCgfp (B), rpfFgfp + rpfCgfp (C) and rpfFgfp + gumBgfp (D). All the inoculations were at a 1:1 ratio of the two strains. (E) shows dual-colour confocal images acquired from mixed cultures of rpfF and gumB carrying the GFP-expressing plasmid (pRU1319) or the EYFP-expressing plasmid (pMP4518), respectively. F and G. Biofilms formed after 4 days by the rpfFgfp strain supplemented with DSF extracted from a 4-day static culture of the wild type (F) or the rpfF mutant (G). Scale bars = 5 mm.
Phenotypic characterization of in planta behaviour of rpf mutants and in vivo complementation studies
On the basis of the above findings, we aimed to investigate the possible biological implication of altering DSF levels on the interaction between Xcc and plants. Our model pathosystem was the interaction of Xcc with N. benthamiana (Yun et al., 2006). Nicotiana benthamiana has become a useful model plant, primarily because it shows an unusual susceptibility to a variety of pathogens. In initial experiments, we analysed the virulence phenotype of the rpf mutants, the gumB mutant and wild-type strain 8004 in N. benthamiana (Fig. 4). Each strain was inoculated in a leaf of N. benthamiana and symptoms and bacterial growth were monitored (see Experimental procedures). In contrast to the wild type, all mutant strains produced almost no symptoms in N. benthamiana (Fig. 4A-D). With the wild-type strain the number of colony-forming units (cfu) recovered from leaf disks cut from the inoculated area increased more than three orders of magnitude over 4 days of infection (Fig. 4E). In contrast, both the rpfF and rpfC strains showed significantly less growth after the same infection period (Fig. 4F and G). As previously reported (Yun et al., 2006), the xanthan-defective gumB mutant was completely asymptomatic on the N. benthamiana leaves (Fig. 4D) and was severely compromised in growth in the plant tissue (Fig. 4H).
To examine the role of DSF cell-cell signalling during Xcc pathogenesis, co-inoculations of N. benthamiana with mixed cultures of different mutants were performed. Although leaves of N. benthamiana inoculated with gumB and rpfF separately showed no symptoms associated with limited bacterial growth (Fig. 4), when the two strains were inoculated together a different outcome was seen. In this case, the growth of both the gumB and the rpfF in N. benthamiana increased significantly (Fig. 4G) and symptoms induced by the mixed culture were similar to those induced by the wild type (Fig. 5C). These observations strongly suggested that reciprocal complementation was occurring. That is, DSF produced by the gumB mutant was complementing the DSF deficiency of rpfF and that xanthan secreted by the rpfF strain complemented the defect in the gumB strain. From these findings we inferred that DSF cell-cell signalling occurred in planta.
To examine whether regulated production of DSF was required for virulence, N. benthamiana leaves were inoculated with the wild-type 8004 and the rpfC mutant in a 1:1 ratio. The rpfC mutant interfered with symptoms caused by the wild-type 8004 and the total bacterial population was two orders of magnitude lower than that of the wild-type strain ( Fig. 5B and F). In contrast, the presence of the DSF-defective rpfF mutant did not modify the symptoms and growth of wild-type strain 8004 in N. benthamiana leaves ( Fig. 5A and E). Furthermore, co-inoculation of the rpfC and rpfF mutants allowed very limited bacterial growth and no symptoms were observed ( Fig. 5D and H).
One possible interpretation of experiments with the rpfC mutant is that the elevated levels of DSF trigger plant defence responses that are responsible for the restriction of bacterial growth and symptom production. To address this point we examined the effects of inoculation of leaves of N. benthamiana with a DSF preparation on expression of a number of defence-related responses. DSF did not induce expression of the defence-related PR1 gene or callose synthesis (data not shown), suggesting that the adverse effects of elevated levels of DSF on virulence are through direct effects on the co-inoculated wild-type bacteria.
Cell-cell signalling and biofilm formation in different environments
The work in this article offers further insight into the role of the rpf/DSF signalling system in the biology of Xcc. By examination of bacterial behaviour in static cultures in minimal medium we have demonstrated that DSF signalling has a role in the formation of structured biofilms and that an excess of DSF prevents such biofilm formation. This is a substantially different picture from that obtained from the study of aggregation/biofilm formation in shaken rich nutrient medium (Dow et al., 2003). In this latter case, rpf mutants form matrix-enclosed aggregates whereas the wild-type strain does not. Furthermore, DSF causes aggregate dispersal in rpfF but not other rpf mutants. These experiments in rich nutrient medium suggested an effect of DSF on biofilm dispersal requiring the RpfC/RpfG two-component system, but no influence on biofilm formation.
Work on other bacteria has established that the environment has an impact on the contribution of cell-cell signalling or quorum sensing to the development of bacterial biofilms and that quorum sensing may be integral to biofilm formation only under certain conditions (Kjelleberg and Molin, 2002;Kirisits and Parsek, 2006). The same considerations appear to apply to the role of DSF signalling in biofilm formation in Xcc. In Xcc the perception of DSF is linked to the degradation of the intracellular signalling molecule cyclic di-GMP by the response regulator RpfG, an HD-GYP domain protein (Ryan et al., 2006). Cellular levels of cyclic di-GMP are controlled through synthesis, catalysed by the GGDEF protein domain, and degradation by EAL and HD-GYP domains. The genome of Xcc encodes 37 proteins with potential roles in cyclic di-GMP turnover. Many of these proteins contain additional signal transduction and sensory domains suggesting that their activities in cyclic di-GMP turnover are responsive to environmental cues (Ryan et al., 2007). Xcc may thus integrate information from a number of environmental inputs, including cell-cell signalling, to modulate cellular cyclic di-GMP levels with consequent effects for biofilm formation and virulence factor synthesis (Ryan et al., 2007). Consequently cell-cell signalling may not have a primary role in biofilm formation under all growth conditions. A further consideration is that the synthesis of xanthan, which is required for biofilm formation, is considerably enhanced in rich medium in the presence of glucose, so that mutation of rpf genes may reduce xanthan production below a critical level for biofilm formation only in minimal medium.
Cell-cell signalling and virulence in Xcc
The results of our experiments with mixed inoculations of bacteria indicate that DSF cell-cell signalling occurs in planta, but that an optimal concentration of DSF is required for virulence. A model of the production of DSF during the growth of Xcc in planta and its relationship to the production of virulence factors and the development of structured biofilms is shown in Fig. 6. Although a close correlation was observed between the effects of DSF levels on structured biofilm formation in minimal medium and on virulence, we cannot conclude a direct cause and effect relationship. Although there is no evidence to suggest that addition of excess DSF negatively influences the synthesis of extracellular enzymes or xanthan, we cannot exclude effects on the synthesis of other virulence determinants. Our findings are similar to those of Lindow and colleagues (2006) who have reported the effects of interference in rpf signalling in the virulence of Xylella fastidiosa the causal agent of Pierce's disease of grape. Xylella fastidiosa is closely related to Xcc and synthesizes a DSF-like signal molecule that is recognized by the Xcc rpf system but which is probably slightly different from DSF. Inoculation of bacteria able to degrade DSF or bacteria able to synthesize DSF (including an rpfC mutant of X. fastidiosa) can reduce virulence and symptom production by X. fastidiosa in grape. Lindow and colleagues concluded that DSF signalling was normally finely balanced during the disease process and that such a fine balance might therefore be readily disrupted. This may have substantial consequences for development of measures for the control of Pierce's disease. The role of rpf/DSF signalling in disease is somewhat different for Xylella and Xanthomonas. In particular, mutation of rpfF leads to enhancement of virulence in X. fastidiosa but reduced virulence in Xcc. Nevertheless, our findings suggest that interference with DSF signalling may also have a role in the control of diseases caused by Xcc and perhaps other Xanthomonas spp.
Microbiological techniques
Xanthomonas campestris pv. campestris strains 8004 (wild type), 8523 (rpfF::Tn5lac) and 8557 (rpfC::pUIRM504) have been described previously (Daniels et al., 1984;Tang et al., 1991;Slater et al., 2000) and were grown at 28°C in PYM medium (Cadmus et al., 1976) or in Y minimal medium containing glucose (1%, w/v) as the carbon source (Sherwood, 1970).
Fig. 6 legend (first panel, A): where the production of DSF (diamonds) is limited, bacteria attach to the surfaces of the xylem vessels (XV). As the microcolony forms (B), DSF levels rise and the bacteria start to produce virulence factors (VF) including extracellular enzymes. Extracellular enzymes can promote disease through interference with plant defences (PD), provide nutrition through degradation of the xylem walls and allow passage of bacteria between xylem elements through degradation of bordered pit membranes (PM). In addition, the structured biofilm begins to form; bacteria within these structures may have increased resistance to host defences. At later stages (C), further elevation of DSF levels promotes biofilm dispersal, so that the bacteria can be released to colonize new tissue. The presence of elevated levels of DSF at early phases prevents the formation of the structured biofilm.
Escherichia coli was grown at 37°C in L medium (Sambrook et al., 1989). Bacterial growth was monitored at 600 nm using MSE Spectroplus spectrophotometer. Plasmids were mobilized into Xanthomonas by triparental mating using a helper plasmid. For analysis of biofilm growth, bacteria were grown in PYM medium for 1 day [optical density at 600 nm (OD600), about 1.5], and then the culture was used as an inoculum at a 1:1000 dilution in Y medium. Biofilm growth on glass was monitored in static cultures by confocal microscopy (see below). In some experiments, bacterial attachment to the side and at the bottom of the glass tubes and the wells of polystyrene plates was assayed by first growing the bacteria in shake flasks with Y medium to an OD600 of 0.8-1.0 and then pipetting 5 ml of this culture into 10 ml glass tubes or 2 ml into the wells of polystyrene 24-well flat-bottom tissue culture plates (Corning Incorporated, Corning, NY), which were then allowed to stand at 28°C for 48 h. Unbound bacteria were removed by gently washing the tubes or the wells three times with fresh growth medium, and attached bacteria were quantified by staining them with 0.01% (w/v) crystal violet (Acros Organics, Geel, Belgium), as described previously (O'Toole et al., 1999).
Extraction and quantification of DSF from static cultures
DSF was extracted from culture supernatants using ethyl acetate as previously reported (Barber et al., 1997). DSF was estimated by a bioassay in which restoration of endoglucanase production to an rpfF mutant is assessed (Barber et al., 1997). Endoglucanase activity was measured by a radial diffusion assay and units were established by using a cellulase I enzyme (Sigma) as standard.
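Converting radial diffusion zone sizes into enzyme units against the cellulase standard amounts to interpolating on a standard curve. The sketch below illustrates one common way to do this (zone diameter regressed on the logarithm of activity); the numbers and the assumed linear relation are invented for illustration and are not the authors' calibration.

```python
import numpy as np

# Hypothetical calibration: zone diameters (mm) produced by known amounts
# of the cellulase standard (arbitrary units per well).
std_units = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
std_zone_mm = np.array([6.1, 7.8, 9.4, 11.2, 12.9])

# Radial diffusion assays are often linearised as zone diameter versus
# log(activity); fit that relation by least squares.
slope, intercept = np.polyfit(np.log10(std_units), std_zone_mm, deg=1)

def zone_to_units(zone_mm: float) -> float:
    """Interpolate endoglucanase activity (units) from a zone diameter."""
    return 10 ** ((zone_mm - intercept) / slope)

# Example zone diameters measured for culture supernatants.
for zone in (8.5, 10.7):
    print(f"{zone:.1f} mm -> {zone_to_units(zone):.2f} units")
```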
Nicotiana benthamiana growth conditions and inoculations
Nicotiana benthamiana seed germination and growth in soil were performed as previously reported (Yun et al., 2006). All plant inoculations involved a minimum of three leaves from each of the three plants, and each experiment was carried out at least three times. Inoculation was performed according to published methods (Newman et al., 1994). Bacteria were hand infiltrated into plant leaves at the abaxial surface by using a 1 ml syringe without a needle, with Xcc strains (10⁷ cfu ml⁻¹ in H2O) or H2O, and bacterial development was assessed as reported (Yun et al., 2006).
Confocal laser-scanning microscopy (CLSM)
A confocal laser-scanning microscope (Carl Zeiss LSM510-Axiovert 100 M) was used to visualize the different events of biofilm formation in a 4/5-day time-course experiment using chambered cover glass slides containing a borosilicate glass base 1 mm thick (Laboratory-Tek Nunc; No. 155411) and GFP-labelled bacteria as described previously (Russo et al., 2006). Confocal images were acquired from bacterial cultures carrying the plasmid pRU1319, which expresses the green fluorescent protein (GFPuv) (Allaway et al., 2001) or the plasmid pMP4518 expressing the enhanced yellow fluo-rescent protein (EYFP) (Stuurman et al., 2000). GFP-or EYFP-labelled bacterial cultures were diluted 1:1000 and grown in the chambers for up to at least 10 days at 28°C. Such static cultures typically reached an OD600 of about 1.7, as determined by re-suspending the biofilm bacteria and measuring their optical density. To prevent desiccation, the chambers were incubated in a humid sterile Petri dish. A typical mature biofilm was developed by the 8004 wild-type strain in static cultures in Y minimal medium containing glucose after 4/5 days at 28°C, when the OD600 was about 1.1. Three-dimensional images were reconstructed using the Zeiss LSM Image Browser version 3.2.0. Dual-colour confocal images were acquired from mixed cultures carrying the GFP-expressing plasmid (pRU1319) or the EYFP-expressing plasmid (pMP4518) (Stuurman et al., 2000). GFP-expressing bacteria appeared green, and EYFP-expressing bacteria appeared in yellow. The detection of the emitted light was performed as described previously (Stuurman et al., 2000). Dual-colour images were acquired by sequentially scanning with settings optimal for GFP (488 nm excitation with argon laser line and 505-nm-long pass emission) or EYFP (488 nm excitation with argon laser line and detection of emitted light between 530 and 600 nm). Rates of biofilm formation by bacteria expressing both constructs were similar, and no difference in growth or biofilm formation could be detected using (non-fluorescence) microscopy of biofilms formed by bacteria containing or lacking the GFP or EYFP construct. | 2016-05-04T20:20:58.661Z | 2007-08-01T00:00:00.000 | {
"year": 2007,
"sha1": "57f65308144f31104d71005df00b0d4806ae953b",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc1974818?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "f659298634ca1e2eba9313bf7e21dc0aaee4a33e",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
3861015 | pes2o/s2orc | v3-fos-license | Risk factors for progression of radiographic knee osteoarthritis in elderly community residents in Korea
Background Knee osteoarthritis (OA) is the most common form of arthritis affecting the elderly. Understanding the risk factors for knee OA has been derived from cross sectional studies. There have been few longitudinal studies of risk factors for knee OA among Asian populations. The purpose of this study was to evaluate the risk factors for knee OA in elderly Korean community residents. Methods This prospective, population-based study was conducted on residents over 50 years of age in Chuncheon who participated in the Hallym Aging Study. Standardized weight-bearing semi-flexed knee anteroposterior radiographs were obtained in 2007 and in 2010. Of 504 participants at baseline, 322 participants (male: female = 150:172) underwent follow-up knee radiographs. Radiographic knee OA was defined as Kellgren/Lawrence (K-L) grade of ≥ 2. Risk factors assessed at baseline were tested for their association with incidence, progression, and worsening of radiographic knee OA by logistic regression analysis. Results The median age of these participants at follow-up was 71 years (interquartile range 66–75 years). Incident OA was observed in 33 (10.2%) and progression of OA (defined as an increase of Kellgren-Lawrence (K-L) grade at follow-up, from grades 2 or 3 at baseline) in 43 (13.55%) participants. In multivariate logistic regression analysis, only females were significantly associated with the progression of radiographic knee OA (odds ratio [OR] = 4.41, 95% confidence interval [CI] 1.32–14.77). Conclusions In this 3-year longitudinal study, the yearly incidence and progression of knee OA was higher than those previously reported in Western populations.
Background
Knee osteoarthritis (OA) is the most common form of arthritis affecting the elderly and is a growing public health concern as the population ages. In the US, in 2004, approximately 431,485 primary knee replacements were performed [1]. This was a 53% increase in primary knee replacements, compared with data from 2000. From 2002 to 2005, 103,601 total knee replacement (TKR) surgeries were performed in South Korea, and approximately 83% of these were associated with knee OA [2]. The rate of TKR increased over the 4 years of the study and was much higher in women than in men. In rapidly aging societies such as in Korea, the increasing prevalence of knee OA may present serious new health issues. Previous studies have reported various risk factors associated with knee OA such as older age, female sex, hypertension, raised glucose, obesity, history of knee injury, varus/valgus malalignment, quadriceps muscle strength, and physical workload [3][4][5][6][7][8][9][10][11][12]. However most of these studies for risk factors of knee OA have been performed in persons of European origin, so the results cannot be extrapolated to Asian populations. There have only been a few longitudinal studies of risk factors for knee OA among Asian peoples [13,14]. We have previously examined the prevalence of radiographic knee OA (ROA) and symptomatic OA in a 2007 cross-sectional study, using the standardized radiographic protocol, and the prevalence was 37.3% and 24.2%, respectively. The presence of hypertension, having a manual occupation and a lower level of education were significantly associated with the presence of ROA [15]. However, cross-sectional studies can neither show how risk factors affect the progression of knee OA, nor define the cause and effect relationship. Therefore, longitudinal studies are needed to clarify the risk factors for the incidence or the progression of knee OA. The objective of the present study was to assess the incidence, progression, and worsening of radiographic knee OA in elderly Korean community residents during a 3-year follow-up period and, furthermore, to evaluate the prospective risk factors for knee OA.
Participants
The participants in this study were recruited in the Hallym Aging Study (HAS), which commenced in 2004 and involved follow-up examinations at 3-year intervals. The HAS is a prospective cohort of residents aged 50 years or older (70% older than 65 years) in Chuncheon, a city in the northeast area of South Korea. Details of the cohort profile were reported elsewhere [15] and are only briefly described here. The city was divided into 1408 areas based on the Korean National Census conducted in 2000, and 200 areas were randomly selected [16]. Nine hundred eighteen of the 1489 participants completed face-to-face interviews at baseline in 2004. Of the 918 participants, 702 participated in the 2007 survey, excluding 216 of them who died, moved, refused participation, or could not be contacted. Among the 702 participants, 504 who underwent knee radiography participated in the 2007 OA study cohort. After 3 years, 182 patients were lost to follow-up and 322 completed the survey, including radiographs, and constituted our present 2010 study cohort. The Hallym University institutional review board approved the study protocol, and informed consent was obtained from all the study participants.
Data collection
Demographic information, such as educational level, marital status, income, occupation, regular exercise, and comorbidities was collected through face-to-face interviews by trained interviewers. Educational levels were classified as < 10 or ≥10 years. Income was divided into 11 categories, and low income was defined as < 500,000 Korean Won (1000 Korean won is approximately 1.00 US dollars) per month. Occupations were categorized as follows: none, mostly sedentary work, work demanding some walking, work demanding physical exertion, and work demanding heavy physical exertion. Manual work was defined as work demanding physical or heavy physical exertion. Exercise status was self-reported, and answers were classified as < 3 times/week or ≥3 times/ week. Smoking was defined as more than 20 packs of cigarettes smoked during the participants' lifetime. Alcohol consumption was defined as the drinking of any alcoholic beverage more than once per month. Comorbidity health information was also self-reported and recorded using 29 predefined diagnostic categories, which included hypertension, diabetes mellitus, arthritis, stroke, and osteoporosis. Body mass indexes (BMIs) were calculated as the body weight divided by the height squared (kg/m 2 ).
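As a concrete illustration of the last calculation, BMI can be computed and checked against the 25 kg/m² cut-off used later for obesity; the sketch below uses arbitrary example values.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by the square of height (kg/m^2)."""
    return weight_kg / height_m ** 2

# Arbitrary example participant; 25 kg/m^2 is the obesity cut-off used
# when comparing participants with and without knee OA.
value = bmi(weight_kg=68.0, height_m=1.62)
print(f"BMI = {value:.1f} kg/m^2, obese = {value >= 25}")
```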
Radiographic assessment
All the participants underwent radiographic examination of both knees in a weight-bearing anteroposterior view with a semi-flexed knee position. A Plexiglas frame (SYNARC, San Francisco, CA, USA) was used to standardize the knee positions. Details of the study protocol were described elsewhere [15]. Knee OA severity was classified as grade 0-4 according to the Kellgren/ Lawrence (K-L) grading system. Radiographic OA was defined as a K-L grade of ≥ 2, and severe radiographic OA was defined as a K-L grade of 3 or 4. Radiographs were read twice by one reader, an academically-based rheumatologist of 17 years of experience (HAK). The reproducibility of the intra-reader assessments was high (for OA vs. no OA, κ = 0.89). Films that allocated different K-L grades at the two readings were adjudicated through consensus between the original reader and a second reader (David Hunter at the University of Sydney).
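The intra-reader reproducibility quoted above (κ = 0.89 for OA vs. no OA) is a Cohen's kappa computed from the two readings of the same films. The following self-contained sketch shows the calculation on invented binary readings; it is illustrative only and not the study data.

```python
from collections import Counter

def cohen_kappa(reading1, reading2):
    """Cohen's kappa: chance-corrected agreement between two ratings."""
    assert len(reading1) == len(reading2) and reading1
    n = len(reading1)
    observed = sum(a == b for a, b in zip(reading1, reading2)) / n
    counts1, counts2 = Counter(reading1), Counter(reading2)
    categories = set(reading1) | set(reading2)
    expected = sum((counts1[c] / n) * (counts2[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Invented first and second readings of ten films (1 = OA, 0 = no OA).
first  = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
second = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
print(round(cohen_kappa(first, second), 2))  # 0.8 for these example readings
```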
Statistical analysis
The participants were divided into 4 age groups, namely, 50-59 (29 participants), 60-69 (88 participants), 70-79 (178 participants), and 80-89 years (27 participants). Due to the inherent limitations of complete case analysis, a post hoc available-case analysis was performed, when possible, to check for dropout bias. The age-specific prevalence of 3-year incidence, progression, and worsening of radiographic knee OA was calculated. The incidence of radiographic knee OA was defined as having a K-L grade of 0 or 1 at baseline and a grade of ≥ 2 (radiographic OA) at follow-up. Progression was defined as an increase of the K-L grade at follow-up from grades 2 or 3 at baseline. Worsening was defined as an increase in the K-L grade at follow-up from any other grade (including grades 0 and 1). The group with worsening knee OA essentially included incident cases. The annual cumulative incidence, progression, and worsening were calculated by dividing them by the number of years under observation. To compare participants with/without OA, continuous variables were tested using the Mann-Whitney U test, and categorical variables were tested using Fisher's exact test. Crude odds ratios (OR) for risk factors for incidence, progression, and worsening of radiographic knee OA were calculated with 95% confidence intervals (CI). Adjusted ORs were calculated using logistic regression analysis after adjusting for the factors significantly associated with incidence, progression, and worsening of knee OA in univariate analysis. Data were analyzed using SPSS version 15. Data are presented as median and interquartile ranges (IQR) or as percentages. P values < 0.05 (2-tailed) were considered statistically significant.
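The outcome definitions above map directly onto simple rules applied to each participant's baseline and follow-up K-L grades, with annual rates obtained by dividing the cumulative proportion by the years of observation. The sketch below is a minimal illustration of those rules on invented grades, not an analysis of the HAS data.

```python
def classify(baseline_kl: int, followup_kl: int) -> dict:
    """Apply the study's outcome definitions to one participant."""
    return {
        "incidence":   baseline_kl <= 1 and followup_kl >= 2,
        "progression": baseline_kl in (2, 3) and followup_kl > baseline_kl,
        "worsening":   followup_kl > baseline_kl,   # from any baseline grade
    }

def annual_rate(n_events: int, n_at_risk: int, years: float) -> float:
    """Cumulative proportion divided by the years under observation."""
    return n_events / n_at_risk / years

# Invented (baseline, follow-up) K-L grades for six participants.
grades = [(0, 0), (1, 2), (2, 3), (3, 3), (0, 2), (2, 2)]

# Incidence is assessed only among knees free of radiographic OA at baseline.
at_risk = [(b, f) for b, f in grades if b <= 1]
incident = sum(classify(b, f)["incidence"] for b, f in at_risk)
print(incident, round(annual_rate(incident, len(at_risk), years=3), 3))  # 2 0.222
```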
Characteristics of the study participants
Of the 504 participants who underwent knee radiographs in the 2007 survey, 322 completed the survey, including radiographs, and constituted our 2010 study cohort. There was no significant difference in age and sex between the complete follow-up group and the group lost to follow-up (Table 1). The median participant age was 71.0 years, and 53.4% were women in the complete follow-up group. Fifty-eight participants (18%) had moderate to severe OA, defined as a K-L grade of ≥ 3. The characteristics of the 504 participants at baseline in this study are shown in Table 1. The median age of subjects with knee OA was higher than that of subjects without knee OA (72.64 vs. 68.62 years) (Table 2).
Participants who were not obese (BMI < 25 kg/m²) were more likely to have no knee OA (67.4%). The characteristics of the subjects with/without knee OA at baseline are shown in Table 2.
Prevalence of incidence, progression and worsening of radiographic knee OA
The worsening of radiographic knee OA was observed in 126 participants (39.1%; M: F = 29.3%: 47.7%). The rates of incidence, progression, and worsening were highest in the 70-79 age group (6.2%, 8.39%, and 23.6%, respectively), and leveled off afterwards. Women tended to have higher rates of progression and worsening in all age groups. The prevalence of incidence, progression, and worsening of radiographic knee OA with respect to age and sex is summarized in Figs. 1, 2, and 3.
Longitudinal risk factors for radiographic knee OA
We analyzed the data to determine risk factors for the progression of radiographic knee OA (Table 3). In the univariate analysis, sex, smoking, alcohol consumption, manual occupation, marriage, education level and osteoporosis were significantly associated with the progression of radiographic knee OA. However, in the multivariate logistic regression analysis, only women were significantly associated with the progression of radiographic knee OA (OR = 4.41, 95% CI 1.32-14.77). We next performed an analysis to determine the risk factors for worsening of radiographic knee OA (Table 3). Being female (OR = 1.41, 95% CI 1.02-1.95), and having a lower level of education (OR = 0.52, 95% CI 0.35-0.77) were significantly associated with a worsening of radiographic knee OA in the univariate analysis. In the multivariate logistic regression analysis, only a lower level of education was significantly associated with worsening of radiographic knee OA (OR = 0.56, 95% CI 0.37-0.86). In the incidence analysis of radiographic knee OA, we could not find any correlating risk factor.
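Adjusted odds ratios with 95% confidence intervals of the kind reported here (e.g. OR = 4.41, 95% CI 1.32–14.77 for female sex) come from exponentiating logistic-regression coefficients and their confidence bounds. The sketch below shows that step on synthetic data with statsmodels; the variable names and effect sizes are invented, and this is not the study dataset or the authors' SPSS analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300

# Synthetic stand-ins for a few baseline covariates.
df = pd.DataFrame({
    "female": rng.integers(0, 2, n).astype(float),
    "age":    rng.normal(71, 6, n),
    "bmi":    rng.normal(24, 3, n),
})

# Simulate a binary outcome (progression of radiographic knee OA).
logit_p = -8.0 + 1.2 * df["female"] + 0.08 * df["age"] + 0.05 * df["bmi"]
df["progression"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(df[["female", "age", "bmi"]])
fit = sm.Logit(df["progression"], X).fit(disp=False)

# Exponentiate coefficients and confidence bounds to obtain ORs and 95% CIs.
table = pd.concat(
    [np.exp(fit.params).rename("OR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(table)
```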
Discussion
In this prospective 3-year follow-up study of 504 Chuncheon city residents aged 50 years and older, 322 participants (male: female = 150: 172) underwent a 3-year follow-up knee radiograph. Incidence, progression, and worsening of knee OA were observed in a significant number of participants at the 3-year follow-up. In the multivariate logistic regression analysis, only women were significantly associated with the progression of radiographic knee OA and a lower level of education was significantly associated with the worsening of radiographic knee OA. A limited number of population-based studies have examined the incidence or progression of radiographic knee OA [8,13,14,17,18] and only two have reported on Asian populations [13,14]. In the US Framingham Osteoarthritis Study, which involved follow-up after a mean 8.1-year interval, the progression of radiographic knee OA, defined as having a K-L grade of ≥ 2 at baseline and showing an increase of at least one K-L grade at follow-up, was 24.2% and 31.8% (3.0% and 3.9% per year) in men and women, respectively [17].
Table 1 and Table 2 footnote. Except where indicated otherwise, values are written as percentages. Levels of education were classified as < 10 years or ≥ 10 years. Income was divided into 11 categories and low income was defined as < 500,000 Korean won per month. Exercise status was self-reported and responses were classified as < 3 times/week or ≥ 3 times/week. Smoking was defined as more than 20 packs of cigarettes having ever been smoked during the participants' lifetime. Alcohol consumption was defined as the drinking of any alcoholic beverage more than once per month. Manual work was defined as work demanding physical or heavy physical exertion. Co-morbidity health information was also self-reported and was recorded using 29 pre-defined diagnostic categories. Diabetes mellitus was defined as either a fasting glucose level ≥ 126 mg/dL or a 2-h glucose level of ≥ 200 mg/dL after 75-g oral glucose loading, or treatment for previously diagnosed diabetes mellitus.
Fig. 1 Prevalence of incidence of radiographic knee OA, according to age and sex. Incidence of radiographic knee OA was defined as having a K-L grade of 0 or 1 at baseline and a grade of ≥ 2 at follow-up.
Fig. 2 Prevalence of the progression of radiographic knee OA, according to age and sex. Progression of radiographic knee OA was defined as an increase of the K-L grade at follow-up, from grades of 2 and 3 at baseline.
In the Chingford Women's Study, a UK community-based cohort was followed up for more than 14 years, and the annual rates of disease progression and worsening were 2.8% and 3.0%, respectively [18]. In the present study, the annual rates of knee OA progression and worsening were 7.36% and 15.9% in women, respectively, which are much higher than those of previous studies in the US and UK [8,17,18], implying that progression and worsening of knee OA are higher among Korean women than in those of European origin. In the Japanese population-based 3-year follow-up ROAD study, the progression rate of knee OA was 6.3% per year in women [14]. The higher progression rate of radiographic knee OA in Korean and Japanese women might be due to lifestyle factors, such as sitting with legs crossed, sitting with knees and feet together on the floor, or genetic factors. In the Framingham Osteoarthritis Study, the incidence of radiographic knee OA was 1.4% and 2.2% per year in men and women, respectively [17]. In the Chingford Women's Study, the incidence was 2.3% per year in women [18]. In the ROAD study, the incidence was 2.0% and 3.7% per year in Japanese men and women, respectively [14]. In the present study, we also examined the incidence of knee OA, and found that the incidence rate of knee OA was 3.1% and 3.7% per year in Korean men and women, respectively, which is also higher than the rates in other previous epidemiologic studies in the US and the UK [17,18]. We could not find any risk factors for the incidence of knee OA, which may be attributable to the rather small sample size of the present study.
In this study, only women were significantly associated with the progression of radiographic knee OA after adjustment for covariates including age, BMI, education, income, exercise, smoking, alcohol consumption, manual occupation, marriage, baseline K-L grade, DM, and osteoporosis. Being female has previously been reported as a risk factor for knee OA [6,13,14]. Only a low education level was significantly associated with the worsening of radiographic knee OA, while being female was significant only in the univariate analysis. The level of education correlates with sex in this study cohort, which suggests that multicollinearity may have been the cause of this discrepancy. Although smoking was negatively associated with the progression of OA in the univariate analysis, it is intuitively improbable that it actually protects against OA progression. In addition, it was strongly correlated with sex; therefore, we excluded smoking from the multivariate analysis. A lower level of education, which was significantly associated with the worsening of radiographic knee OA, has been associated with increased prevalence, morbidity and mortality of many chronic diseases. Several previous studies have examined the relationship between formal education levels and hip and knee OA, showing results consistent with our study [19][20][21]. In the US National Health and Nutrition Examination Survey, after adjustment for age, knee injury, ethnicity, obesity, and occupation, low educational attainment was associated with a high prevalence of knee OA in both men and women [19]. After adjustment for known risk factors, educational attainment, as an indicator of socioeconomic status, is associated with symptomatic knee OA in both men and women and with radiographic knee OA in US women [20]. In a USA study of African-American and European-American men and women aged ≥ 45 years, pain and disability were significantly associated with low educational attainment in radiographic and symptomatic hip OA, after adjusting for covariates including age, sex, ethnicity, BMI, and the presence of knee symptoms [21].
Our study had strengths and limitations. To the best of our knowledge, the present study is the first longitudinal study to evaluate the progression, incidence, and risk factors of radiographic knee OA, using standardized radiographs and a recognized grading system in Korea. However, despite its prospective design, which is rare in Asian population studies, 3 years is a rather short time to evaluate the progression of OA. Our study contains a relatively small sample size, and the previously known risk factors of knee OA may not be statistically significant. The study area included only Chuncheon, a city in South Korea, reducing the representativeness of the study sample.
Fig. 3 Prevalence of radiographic knee OA worsening, according to age and sex. Worsening of radiographic knee OA was defined as an increase of the K-L grade at follow-up, from any baseline grade.
Table 3 footnote. Most of the women were non-smokers (male: female = 27.3%: 93.6%). Smoking was removed from the multivariate analysis because of the multicollinearity problem with sex. OR, odds ratio; 95% CI, 95% confidence interval; BMI, body mass index; K-L, Kellgren-Lawrence. | 2018-03-14T21:39:42.826Z | 2018-03-12T00:00:00.000 | {
"year": 2018,
"sha1": "7006d10aa82ee6fc02f83b8c15b64a8f30e9718a",
"oa_license": "CCBY",
"oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/s12891-018-1999-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7006d10aa82ee6fc02f83b8c15b64a8f30e9718a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11255836 | pes2o/s2orc | v3-fos-license | Scar extent evaluated by late gadolinium enhancement CMR: a powerful predictor of long term appropriate ICD therapy in patients with coronary artery disease
Background Coronary artery disease (CAD) patients are at risk for life-threatening ventricular arrhythmias (VA) related to scar tissue. Late gadolinium enhancement cardiovascular magnetic resonance (LGE-CMR) can accurately identify myocardial scar extent. It has been shown that scar extent, particularly scar transmurality, percent scar and scar mass, is associated with the occurrence of appropriate implantable cardioverter-defibrillator (ICD) therapy. However, how the quantification method affects the measured transmural extent has never been studied. The purpose of our study was to evaluate whether different methods quantifying scar transmurality, percent scar and scar mass (assessed with LGE-CMR) can predict appropriate ICD therapy in CAD patients over a long-term follow-up period. Methods and results We retrospectively enrolled 66 patients with chronic CAD referred for primary or secondary preventive ICD implantation and LGE-CMR before ICD implantation. Using LGE-CMR, scar extent was assessed by measuring scar mass, percent scar and transmural scar extent using four different methods. The median follow-up duration was 41.5 months (interquartile range 22–52). The endpoint was the occurrence of appropriate device therapy, which occurred in 14 patients. Pre-ICD revascularization and transmural scar extent were significantly associated with the study endpoint, but the latter was highly dependent on the method used. Patients with appropriate device therapy also had a larger scar mass (29.6 ± 14.5 g vs 17.1 ± 8.8 g, p = 0.004) and a larger percent scar (15.1 ± 8.2% vs 9.9 ± 5.6%, p = 0.03) than patients without appropriate device therapy. In multivariate analysis, scar extent variables remained significantly associated with the study end-point. Conclusions In this study of CAD patients implanted for primary or secondary preventive ICD, pre-ICD revascularization and scar extent studied by LGE-CMR were significantly associated with appropriate device therapy and can identify a subgroup of CAD patients with an increased risk of life-threatening VA. Depending on the method used, transmural scar extent may vary significantly and further studies are needed to obtain a validated and consensual study method.
Background
Sudden cardiac death (SCD) is the most frequent cause of death in patients with coronary artery disease (CAD) [1]. Implantable cardioverter-defibrillator (ICD) implantation is a recognized beneficial therapy to prevent SCD related to ventricular arrhythmias (VA) in patients with low left ventricular ejection fraction (LVEF). However, identifying patients at high SCD risk remains a difficult challenge. Assessment of altered LVEF is still considered the best discriminant factor for identifying CAD patients at high risk of SCD [2]. However, its predictive accuracy is low [3]. Thus, many patients who receive ICD therapy in the light of current guidelines will never benefit from the device. Post hoc analysis of the MADIT II study population showed that only 35% of the patients who received an ICD for primary prevention received appropriate device therapy during the first 3 years of follow-up [4]. Accordingly, better selection criteria for ICD implantation must be found.
Myocardial scar has been demonstrated as a substrate for malignant reentrant VA that may underlie SCD [5]. Late gadolinium enhancement cardiovascular magnetic resonance imaging (LGE-CMR) can accurately and reproducibly identify myocardial scar tissue and its extension [6,7]. The amount, as well as the transmural extent of myocardial scar tissue on LGE-CMR has been shown to predict overall mortality in patients with CAD independently of the LVEF [8,9] and thus may identify patients at risk of SCD. However, uniformity in the analysis of the LGE-CMR parameters is lacking and the different methods proposed to quantify the scar are not equivalent and seem poorly reproducible [10]. The most robust and reproducible parameters to quantify scar extent and to predict appropriate device therapy, appeared to be the scar transmurality and the amount of scar (percent scar and scar mass) [10][11][12][13].
The purpose of this study was to evaluate whether different methods quantifying scar transmurality, percent scar and scar mass (assessed with LGE-CMR) can predict appropriate ICD therapy in CAD patients with a long follow-up period.
Study population
The study was conducted in a retrospective observational manner in our cardiology department at the Caen University Hospital (Normandy, France) over a period of 4 years (2006-2009), on 66 consecutive patients with CAD who had undergone LGE-CMR prior to primary or secondary ICD implantation. Study procedures were in accordance with the Declaration of Helsinki. The study protocol did not require institutional review board approval since the study was retrospective and purely observational, patient data were anonymized, and only patients from the Caen University Hospital Center (Caen, France) were included.
CMR
All patients were scanned on a dedicated 1.5 T CMR scanner (Signa, GE Medical Systems, Waukesha, WI) using a cardiac 5-element phased-array receiver coil. Images were acquired during breath-holds of approximately 15 seconds using vector ECG gating. After initial localizer sequences, a stack of steady-state free precession cine images was acquired in the short-axis plane from the level of the mitral valve annulus to the left ventricular (LV) apex. Contrast-enhanced images were acquired approximately 15 minutes after bolus injection of gadoterate meglumine, Dotarem 0.15 mmol/kg (Guerbet, Aulnay-sous-Bois, France), using a standard 2-dimensional inversion recovery gradient-echo sequence [14].
CMR image post-processing and data analysis
All analyses were performed by an experienced cardiologist blinded to patient history using the freely available, validated cardiovascular image analysis software package Segment v1.9 (http://segment.heiberg.se) [15,16]. Short-axis cine images were used to measure end-diastolic volume, end-systolic volume, LV mass and LVEF by standard methods. Papillary muscles were regarded as part of the ventricular cavity. Scar analysis was performed using short-axis LGE-CMR images. Endocardial and epicardial LV borders, as well as scar tissue, were semi-automatically delineated in each LV short-axis slice and then manually corrected. We worked with a binary approach to characterize scar tissue (normal myocardium vs. scar tissue). Three aspects of scar were quantified: the percent scar (percentage of the total LV volume), the scar mass and the transmural scar extent. The percent scar was calculated by summing the absolute amount of hyperenhanced tissue for all LV short-axis slices and dividing by the total amount of LV tissue. The scar mass was obtained by multiplying the percent scar by the LV mass. For the transmural scar assessment, we used four different methods: 1) "scar transmurality area based" (STAB), based on the quantification of LV surface reached; 2) "scar transmurality line based" (STLB), based on the radial extent of late enhancement between the endocardial and epicardial borders; 3) "weighted infarct transmurality" (WIT), based on the LV segment mass reached, weighted by pixel intensity to account for partial volume effect [17]. These first three methods are normalized by the segment AHA area and, consequently, lose the spatial information of scar. We therefore used a fourth method: 4) "spatial maximal scar transmurality" (SMST), which picks up the maximal transmurality in the sector, whereas the other three methods pick up different aspects of average transmurality. This method considers only the spatial distribution of scar, not its quantity compared with healthy tissue (Figure 1). The rationale behind using 4 different methods to quantify transmurality was to demonstrate that using the same term "transmurality" in different studies is not necessarily synonymous with comparable results. For the four methods, the transmural scar extent was split into quartiles (1-24%, 25-49%, 50-74% and 75-100%) [11], and the number of LV segments was expressed in the standard American Heart Association 17-segment model [18]. For all methods, a scar extent ≥ 75% was defined as transmural. All measurements were repeated in 18 patients by the same observer and by a second observer, blinded to the results of the first analysis, to assess intra-observer and inter-observer agreement.
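To make the quantification above concrete, the following is a minimal sketch (not the Segment package's actual implementation) of how percent scar and scar mass could be computed from per-slice binary segmentations; the function name, the mask layout and the assumed myocardial density of 1.05 g/mL are illustrative assumptions.

```python
import numpy as np

def scar_metrics(lv_masks, scar_masks, voxel_volume_ml, myocardial_density_g_per_ml=1.05):
    """Percent scar and scar mass from per-slice binary masks.

    lv_masks / scar_masks : lists of 2D boolean arrays, one per short-axis slice,
                            marking LV myocardium and hyperenhanced (scar) tissue.
    voxel_volume_ml       : volume of one voxel in mL (pixel area x slice thickness).
    """
    lv_voxels = sum(int(m.sum()) for m in lv_masks)        # total LV myocardial voxels
    scar_voxels = sum(int(s.sum()) for s in scar_masks)    # hyperenhanced voxels
    percent_scar = 100.0 * scar_voxels / lv_voxels         # % of total LV myocardium
    lv_mass_g = lv_voxels * voxel_volume_ml * myocardial_density_g_per_ml
    scar_mass_g = (percent_scar / 100.0) * lv_mass_g       # percent scar x LV mass
    return percent_scar, lv_mass_g, scar_mass_g
```

Transmurality, by contrast, requires a per-segment (or per-radial-line) comparison of scar depth and wall thickness, which is why the four methods described above can diverge.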
ICD implantation and details
All patients received an ICD according to international guidelines [19]. Some patients were eligible for cardiac resynchronization therapy (CRT) and received a combined CRT-D device as recommended [20]. All manufacturers were represented for both ICD and CRT-D devices. The ventricular tachycardia (VT) zone was programmed from 171 ± 8/min to 217 ± 7/min, with antitachycardia pacing therapy (ATP) (burst and/or ramp) followed by shock therapies. Arrhythmias faster than 217 ± 7/min were assigned to the ventricular fibrillation (VF) zone with maximal shocks as the first-line therapy.
Follow-up, events and end-point
Follow-up started at ICD implantation. All patients were seen 3 months after ICD implantation and then every 6 months. In our institution, some patients had a remote management system but were still followed as outpatients every 6 months. Patients were instructed to contact the clinic for an additional visit after experiencing an ICD discharge. The median follow-up duration was 41.5 months (interquartile range 22-52). Appropriate ICD therapy was defined as ATP and/or shock therapy for VT or VF and was chosen as the study end-point. Appropriate arrhythmia detection and discrimination were confirmed by analysis of stored electrograms by two electrophysiologists blinded to the CMR analysis. When an appropriate ICD therapy occurred, acute reversible causes (particularly electrolyte disorders) and acute myocardial ischemia as a trigger for arrhythmic events were ruled out by electrocardiogram, standard blood examination and negative troponin levels. ICD therapy was classified as inappropriate when triggered by sinus or supraventricular tachycardia, T-wave oversensing, or electrode dysfunction.
Statistics
Statistical analyses were performed with R software version 2.14.0 (R Development Core Team, Vienna, Austria). Categorical variables were expressed as percentages (numbers) and compared between the two groups (receiving an appropriate ICD therapy or not) using Fisher's exact test. Continuous variables were presented as mean ± standard deviation and compared between the two groups using a Student's t test, or a Mann-Whitney U test if not normally distributed. The associations between the probability over time of receiving an appropriate ICD therapy and all clinical, electrocardiographic and CMR variables presented in Tables 1 and 2 were first assessed in univariable Cox proportional hazards models (or by a log-rank test in the case of a categorical variable for which the Cox model did not converge), but for the sake of clarity only significant variables (plus LVEF and amiodarone, given their clinical relevance) are shown in Table 3. A multivariable model was then constructed with the most significant scar and clinical variables from the univariable analysis, namely the scar mass and any previous pre-ICD revascularization, as covariables. We entered only two covariables in the multivariate model because of the small number of patients receiving an appropriate ICD therapy (n = 14), and we used only one scar variable in the multivariate model because of the high collinearity between scar mass, percent scar and the different transmurality quantification methods. The best model was selected by the log-likelihood test. Unadjusted and adjusted hazard ratios (HR) with their corresponding 95% confidence intervals (CI) were reported. We then performed receiver operating characteristic (ROC) analyses on significant predictors. In all analyses, a p value less than or equal to 0.05 was considered statistically significant.
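The analysis was performed in R; purely as an illustration of the strategy described above (univariable screening followed by a two-covariate multivariable Cox model), an equivalent workflow could be sketched in Python with the lifelines package. All column names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: follow-up time, event indicator (appropriate ICD therapy)
# and candidate covariates (hypothetical column names).
df = pd.read_csv("icd_cohort.csv")

# Univariable screening: one Cox model per candidate variable.
univariable_hr = {}
for var in ["scar_mass_g", "percent_scar", "pre_icd_revasc", "lvef"]:
    cph = CoxPHFitter()
    cph.fit(df[["followup_months", "appropriate_therapy", var]],
            duration_col="followup_months", event_col="appropriate_therapy")
    univariable_hr[var] = float(cph.hazard_ratios_[var])

# Multivariable model restricted to two covariables (only 14 events).
cph = CoxPHFitter()
cph.fit(df[["followup_months", "appropriate_therapy", "scar_mass_g", "pre_icd_revasc"]],
        duration_col="followup_months", event_col="appropriate_therapy")
cph.print_summary()  # adjusted HRs with 95% CIs
```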
Study population
During the study period, 66 patients with new ICD implants for CAD and an LGE-CMR prior to device implantation were included. Their baseline characteristics are shown in Table 1. Fifty-nine patients (89%) presented with ST-elevation MI; 39 patients (66%) received thrombolytic therapy and the other 20 patients (34%) were treated by primary percutaneous coronary intervention. Fifty-one patients (78%) presented with an initial TIMI 0 flow, 38 patients (58%) with multivessel disease, and 56 patients (85%) were successfully treated by percutaneous coronary angioplasty. The median time frame between the coronary ischemic event and LGE-CMR was 4 months (interquartile range 3-6), and the mean time frame between LGE-CMR and ICD implantation was 3.4 ± 1.9 months. In all patients, LGE-CMR was performed to guide the need for potential revascularization prior to ICD implantation, including an assessment of myocardial viability. If necessary, pre-ICD revascularization was performed before ICD placement (with a mean time frame of 1.7 ± 0.3 months). For the 41 patients who had pre-ICD revascularization, the LVEF did not significantly improve after revascularization and so did not modify the ICD indication.
Follow-up and events
During a median follow-up of 41.5 months (interquartile range 22-52), the study endpoint was met in 14 patients (21%) and 10 patients died (15%). Non-cardiac death was reported in 2 patients (3%). Cardiac death occurred in 8 patients (12%): 7 patients (11%) died of end-stage heart failure and 1 patient (2%) died of heart transplant complications. Among patients with appropriate ICD therapy, there was no significant difference between primary (n = 11) and secondary (n = 3) prevention indications (p = 0.90). Appropriate device therapy occurred 21 ± 20 months after ICD implantation. Eight of the 14 patients (57%) who met the study endpoint were treated with ATP directly followed by shock, or with shock therapy only. The remaining 6 patients (43%) were successfully treated with ATP therapy only. The mean VT heart rate was 212 ± 32 bpm.
CMR variables
CMR findings are listed in Table 2. The mean percent scar was 11 ± 6.5% and the mean scar mass was 20 ± 11 g. As demonstrated in Table 2, depending on the method used, the transmurality quantification could differ significantly. The percent scar, the scar mass, the number of LV segments with SMST ≥ 75% and the number of LV segments with STLB ≥ 50% (3.5 ± 2.4 vs 2.0 ± 1.6, p = 0.027) were significantly larger in patients who received appropriate ICD therapy than in those who did not. For the STAB, WIT and SMST methods, the number of LV segments with a transmural extent ≥ 50% did not differ between patients who did and did not meet the study end-point.
All measurements were repeated in 18 randomly selected patients by the same observer and by a second observer, blinded to the results of the first analysis. The intraclass correlation coefficient for scar extent quantification was 0.91 for intra-observer agreement and 0.73 for inter-observer agreement (p < 0.001 for both), demonstrating high reproducibility.
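As an illustration only (the text does not specify the software used for this step), intra- and inter-observer agreement of this kind can be quantified with an intraclass correlation coefficient on a long-format table of repeated readings, for example with the pingouin package; the file and column names below are hypothetical.

```python
import pandas as pd
import pingouin as pg

# Long format: one row per (patient, reading); 'reader' identifies the observer
# (or the repeat reading by the same observer for intra-observer agreement).
readings = pd.read_csv("repeat_scar_readings.csv")

icc = pg.intraclass_corr(data=readings, targets="patient_id",
                         raters="reader", ratings="scar_mass_g")
print(icc[["Type", "ICC", "CI95%", "pval"]])
```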
Predictors of appropriate ICD therapy
As shown in Table 3 and Figure 2, univariate variables significantly associated with the study end-point were the number of transmural (≥ 75%) scar segments studied by the STAB method, number of segments with a scar extent ≥ 50% studied by the STLB method, total number of segments presenting scar studied by the SMST method, the percent scar, the scar mass, and any previous pre-ICD revascularization. Notably, LVEF (p = 0.14), QRS width (p = 0.14) and amiodarone use (p = 0.13) were not associated with the study end-point.
In our multivariate models, we could only retain the two most significant parameters because of our small study population. Based on the univariate analysis, we took the scar mass and the percent scar as scar variables. We included pre-ICD revascularization since it was the clinical parameter most strongly associated with appropriate device therapy (Table 3). Scar variables remained strongly associated with the occurrence of appropriate ICD therapy, with the strongest association seen for the scar mass (HR 3.15; 95% CI 1.35-7.33; p < 0.001 and HR 10.8; 95% CI 2.1-53.6; p = 0.001).
CMR scar and Kaplan-Meier analysis
Median values of scar variables significantly associated with appropriate device therapy were used to individualize the risk of appropriate ICD therapy in this population. For the number of scar segments with transmural extent, patients were again separated into two groups based on the STLB method. We chose the STLB method because it presented the strongest association with the study endpoint. Appropriate ICD therapy occurred in 11 of 30 patients with > 2 segments with a scar extent ≥ 50% compared with only 3 of 36 patients with ≤ 2 segments (p = 0.005). For the entire study population, the negative predictive value of ≤ 2 segments with a scar transmural extent ≥ 50% was 92% and sensitivity was 79% and the specificity 63%. For the scar mass (median value 20 g, 28 patients with a large scar mass > 20 g and 38 patients with a small scar mass ≤ 20 g), 10 patients (36%) with a large scar mass received appropriate ICD therapy compared with only 4 patients (11%) with a small scar mass (p = 0.01). The negative predictive value of a small scar mass was 90%, the sensitivity 71% and the specificity 65% for the entire study population. For the percent scar (median value 11%, 27 patients with a percent scar > 11% and 39 patients with a percent scar ≤ 11%), 9 patients (33%) with a large extent of scar received appropriate ICD therapy compared with only 5 patients (13%) with a small extent of scar (p = 0.04). The negative predictive value of a small extent of scar was 87% for the entire study population. Kaplan-Meier survival curves for appropriate ICD therapy-free survival were calculated between patient groups stratified by median scar indices (scar mass, percent scar and number of transmural scar segments) (Figure 3).
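The operating characteristics reported above follow directly from the counts given in the text; the short calculation below reproduces them for the STLB cut-off of more than 2 segments with a scar extent ≥ 50%.

```python
# Counts from the text: 30 patients above the cut-off (11 with appropriate therapy),
# 36 patients at or below the cut-off (3 with appropriate therapy); 14 events in total.
tp, fn = 11, 3
fp = 30 - tp            # 19 above the cut-off without appropriate therapy
tn = 36 - fn            # 33 at or below the cut-off without appropriate therapy

sensitivity = tp / (tp + fn)   # 11/14 ≈ 0.79
specificity = tn / (tn + fp)   # 33/52 ≈ 0.63
npv = tn / (tn + fn)           # 33/36 ≈ 0.92
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, NPV={npv:.2f}")
```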
Discussion
Patients with chronic CAD are at risk of developing cardiovascular events, and particularly VA, but different risk profiles exist. Traditional clinical indicators, such as depressed LVEF, are still used to identify patients at risk of SCD [21]; however, these markers have low predictive value and many patients with an ICD will never benefit from the implantation [4]. Finding new indicators therefore remains a challenge. In our study, the only clinical parameter statistically associated with the study end-point was pre-ICD revascularization (Table 3). This finding is consistent with data from the literature. Indeed, the absence of pre-ICD revascularization was demonstrated as a risk factor for SCD and VA by Barsheshet et al. [22]. In a multivariate analysis, they demonstrated that patients without prior revascularization had a 48% increased risk (p = 0.01) of VT/VF or death. This indicates that, in addition to the scar tissue, myocardial regions not revascularized before ICD implantation may predispose to recurrent ischemia or hibernation and could be an associated substrate for VA occurrence. They also showed that the association between pre-ICD revascularization and arrhythmic risk was similar among patients who underwent either coronary artery bypass graft (CABG) or percutaneous coronary intervention (PCI) as the last revascularization procedure prior to enrollment. In our study, there was a significant difference between PCI and CABG, but this result must be interpreted in the light of our relatively small cohort.
After a MI, scar tissue serves as an important substrate for VA, based on a re-entry phenomenon [23]. Consequently, assessment of scar extent by LGE-CMR could be useful for risk stratification of CAD patients. In this retrospective and observational study with long term follow-up, we confirmed that indices of LV scar extent, quantified by LGE-CMR, are associated with the occurrence of appropriate ICD therapy in CAD patients, independently of LVEF. These results are consistent with data in the literature. Scott et al. quantified myocardial scar in a 64 patients study with a mean follow-up of 19 ± 10 months [11]. The mean number of myocardial segments with transmural scar was 2.3 ± 2.1 and the mean percent scar was 14 ± 10%. These two criteria were significant predictors of appropriate device therapy (p = 0.001 and 0.02 respectively). These findings were also confirmed by Boyé et al. in 52 patients with chronic MI referred for primary preventive ICD implantation [24]. Infarct size was significantly larger in patients with appropriate device therapy or death (24 ± 8 g vs 16 ± 12 g, p = 0.02). One reason explaining that percent scar and scar mass are more significant than the transmural extent could be that in patients following MI, subsequent increases in infarct size correlate closely with the transmural extent of infarction, so increases in infarct size reflect increasing transmurality given the same area at risk [25].
Quantitatively assessing infarct extent after MI has been a challenge for many years because of its important clinical implications. The concept of transmurality in humans is supported by coronary perfusion, which proceeds from the subepicardial to the subendocardial regions, resulting in a variable and heterogeneous extent in the myocardial wall in the case of acute non-Q-wave MI and an excess vulnerability of the subendocardial region to ischemia [26]. The goal during the following years was to develop methods to assess infarct size in order to prevent the development of myocardial injury following an acute MI. In this sense, special attention was paid to transmurality. Indeed, a large extent of transmural necrosis is known to induce deleterious remodeling [27] by loss of circular strain [28], which may itself cause congestive heart failure. LGE-CMR is rapidly becoming the standard method because of its efficiency in detecting and distinguishing viable and nonviable myocardium [7,29]. To date, the transmurality concept is constantly used to assess infarct extension and severity, but our understanding of transmurality is largely based on animal models [30,31], which may cause errors when transposing to humans. It should also be noted that, currently, validation of methods to assess infarct size and transmurality after MI is limited [32], leading to mixed results for the same term "transmurality". This problem is well illustrated in our study. We observed a significant association between transmurality and appropriate ICD therapy, but this is highly dependent on the method used (Table 2). The first 3 methods (STAB, STLB, WIT), by studying scar transmurality with the same approach based on normalization by the total area of the AHA segmentation, regardless of the spatial concept of transmurality, are therefore more sensitive to scar mass than to transmurality. They tend to underestimate scar transmurality (most patients present a scar transmurality < 50%, Table 2). Moreover, as can be seen in Figure 4, these 3 parameters are strongly correlated (R ≥ 0.959 for all 17 segments, p = 0). Conversely, the fourth method, by studying scar transmurality while considering spatial information, reflects transmurality rather than scar mass and therefore tends to detect considerable transmural damage (most patients present a scar transmurality ≥ 75%, Table 2). In view of these results and the absence of consensus about transmurality quantification, the study of scar transmurality must remain a secondary endpoint in the CMR evaluation and should not contribute to the decision of ICD implantation.
Recently, several studies abandoned the binary approach of scar quantification (scar tissue vs normal myocardium) and focused on the border zone around an infarct (peri-infarct zone), also determined by LGE-CMR. These studies have suggested that these parameters could predict mortality, inducibility of VA, and the occurrence of appropriate ICD therapy [11,[33][34][35]. However, these parameters seem to be time-consuming and relatively operator-dependent, and are difficult to apply on a daily basis for the risk stratification of CAD patients [10]. In 55 patients and during a mean follow-up of 2 years, De Haan et al. evaluated previously validated methods of scar assessment by LGE-CMR in their ability to predict VA [10]. They suggested that quantification of total scar size with the binary approach (scar tissue vs. normal myocardium) is better and sufficient for SCD risk stratification of CAD patients. Moreover, in a recent experimental trial, Schuleri et al. studied the temporal evolution of the peri-infarct zone.
They showed that the peri-infarct zone is dynamic and decreases over time and after a reperfused myocardial infarction [36].
Limitation
This study was a single-center observational trial, with a relatively small sample size and a small number of appropriate device therapies, so the present conclusions require confirmation in larger, prospective, multicenter cohort studies. Moreover, from a pragmatic point of view, a cut-off value is needed to link scar extent to the identification of patients who are most likely to benefit from ICD implantation. Unfortunately, our study design prevents this. We were not able to compare the extent of fibrosis between patients with primary and secondary prevention indications because of the small number of patients with secondary prevention. Furthermore, this trial included patients with CRT-D. Since biventricular pacing may also diminish the susceptibility to VA [37], this may have introduced a bias, although no significant difference was seen in the prevalence of CRT-D between the patients with and without appropriate device therapy. Finally, our sequences did not include coverage of the entire myocardium (just 1 short-axis slice was missing), so we could possibly have missed small areas of scar.
Conclusion
In this single-center study of patients with CAD and ICDs, we demonstrated a strong association between myocardial scar extent characterized by LGE-CMR with a binary approach (scar tissue vs. normal myocardium) and appropriate ICD therapy, independently of LVEF. Depending on the method used, transmural scar extent can predict appropriate ICD therapy in CAD patients, but a validated and consensual study method is required. We hypothesize that scar extent could be used to improve risk stratification strategies in CAD patients being considered for an ICD. | 2017-06-24T17:52:14.069Z | 2013-01-19T00:00:00.000 | {
"year": 2013,
"sha1": "d9c4777f52a5200df15b3354a8e31907044155be",
"oa_license": "CCBY",
"oa_url": "https://jcmr-online.biomedcentral.com/track/pdf/10.1186/1532-429X-15-12",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "58371bf736038e8ae54005b5c9c1b94ebf6ac4c3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
37156939 | pes2o/s2orc | v3-fos-license | New Tools for Molecular Therapy of Hepatocellular Carcinoma
Hepatocellular carcinoma (HCC) is the most common type of liver cancer, arising from neoplastic transformation of hepatocytes or liver precursor/stem cells. HCC is often associated with pre-existing chronic liver pathologies of different origin (mainly subsequent to HBV and HCV infections), such as fibrosis or cirrhosis. Current therapies are essentially still ineffective, due both to tumor heterogeneity and to frequent late diagnosis, making it necessary to develop new therapeutic strategies to inhibit tumor onset and progression and to improve patient survival. A promising strategy for treatment of HCC is targeted molecular therapy based on the restoration of tumor suppressor proteins lost during neoplastic transformation. In particular, the delivery of master genes of epithelial/hepatocyte differentiation, able to trigger an extensive reprogramming of gene expression, could allow the induction of an efficient antitumor response through the simultaneous adjustment of multiple genetic/epigenetic alterations contributing to tumor development. Here, we report recent literature data supporting the use of members of the liver enriched transcription factor (LETF) family, in particular HNF4α, as tools for gene therapy of HCC.
Introduction
Hepatocellular carcinoma (HCC) is one of the most common cancers worldwide and the most frequent among the primary tumors of the liver. HCCs are phenotypically and genetically heterogeneous tumors, since they often develop on the pathological background of pre-existing chronic liver diseases, including fibrosis or cirrhosis in consequence of HBV and HCV infections, alcoholic injury, or autoimmune hepatitis, that impair organ function and reduce the efficacy of common cancer therapies [1]. Moreover, most HCC patients are diagnosed at advanced stages of disease when the high tumor recurrence rate and the tendency to metastasize make current treatments ineffective and the prognosis poor [2].
In recent years, intense pre-clinical and clinical research have been devoted to the development of tailored therapeutic molecules, capable of restoring the physiological cell functions lost in transformed hepatocytes, through the gene therapy of HCCs. Gene therapy is a promising approach, since it is possible to deliver vectors directly into hepatic tumors, reducing potential side effects derived from transduction in non-target cells. Molecules utilized in current protocols include genes for proteins or microRNAs (miRNAs) displaying antitumor properties (anti-proliferative, pro-apoptotic, anti-angiogenic, or immunomodulatory).
Unfortunately, highly effective results have not been obtained so far, due to the low efficiency of the gene transfer [3] and to the genetic heterogeneity of HCCs [4]. For this reason, the most promising candidates would be oncosuppressor genes able to induce an efficient antitumor response without a specific correction of multiple mutations contributing to tumor development (e.g., p53) or differentiation-specific master genes, able to act as reprogramming transcriptional factors, coordinating extensive gene expression. In the context of the latter strategy, we will discuss recent progress in the knowledge of HCC biology and genetics supporting the use of Liver Enriched Transcription Factors (LETFs), and in particular of hepatocyte nuclear factor 4α (HNF4α), as promising candidates for targeted gene therapy of HCCs.
In addition, recent findings highlight how epigenetic alterations are commonly observed in human HCCs [11,12] and can be exploited as clinical predictors for diagnosis and prognosis [13]. These alterations include aberrant methylation of tumor suppressor genes [13], post-translational histone modifications [14,15], and altered expression profile of miRNAs [16,17].
The progression of HCC toward more aggressive stages, responsible for the worst prognosis in patients, is frequently associated to the activation, in transformed hepatocytes, of a transdifferentiation process: the epithelial-to-mesenchymal transition (EMT). EMT contributes to tumor progression through the loss of epithelial/hepatocyte cell differentiation, the acquisition of motility/invasivity properties and cancer stem cell traits, the resistance to apoptosis, and metastasis (reviewed in [18]). Overexpression of EMT markers (i.e., Snail and Twist) has been reported in invasive areas of primary tumors [19] and in metastasis of aggressive hepatocarcinomas [20], and were associated with poor prognosis. Their analysis in circulating tumor cells has been recently proposed as prognostic tools for HCC patients [21]. The role of EMT master genes, in particular Snail, in tumor progression was found to be mediated by (i) the direct transcriptional repression of an extensive amount of target genes involved both in epithelial (e.g., E-cadherin) [22] and hepatic (e.g., HNF4α) [23] differentiation, (ii) the increase of mesenchymal gene expression [23], and iii) the miRNA-mediated up-regulation of stemness genes [24]. The acquisition of EMT-related stem cell characteristics has been demonstrated to positively correlate with HCC progression [25]. The presence of stemness traits in HCC tumor cells, indeed, has been associated with chemoresistance and tumor recurrence after surgery [26,27] and can contribute to the intratumoral heterogeneity of HCC tumor cells [28].
Non-Cell-Autonomous Cues
An important role in HCC is also played by non-cell-autonomous cues, such as the presence of factors in the tumor niche promoting tumor growth or influencing proliferation/activation of tumor-associated fibroblasts [29,30].
In particular, the role of the TGFβ cytokine in the progression of HCC has been extensively described [43].
Recent reports highlighted a role of biophysical changes in extra-cellular matrix stiffness as microenvironmental cues influencing tumor growth and progression. Fibro-cirrhotic livers, for example, are characterized by a significant increase of ECM stiffness [44]. YAP/TAZ were recently identified as molecular relay of mechanical stimuli exerted by ECM stiffness, inhibited by the Hippo signaling pathway and involved in organ size control [45]. Dysregulation of the Hippo/YAP cascade has been recently reported for several human tumors, including HCC, and correlates with increased cell proliferation and survival, acquisition of stemness properties, and metastasis (reviewed in [46]). In particular, YAP overexpression was found in human HCC samples [47,48] and correlates with poor prognosis of HCC patients [49]. Furthermore, its inhibition in cells from advanced HCC restores hepatocyte differentiation inducing the up-regulation of master factors (i.e., HNF4α/FOXA1/FOXA3) and leading to tumor regression [50]. Interestingly, YAP protein is directly involved in switching occupancy of HNF4α on embryonic hepatoblast genes to adult hepatocyte genes [51], suggesting a direct role of YAP in influencing the function of key transcriptional factors and master genes of hepatocyte differentiation.
Gene Therapy of HCC: Is It a Good Deal?
As highlighted above, current therapies for HCC are still ineffective. Surgical liver resection efficiency is limited to small localized tumors with low risk of recurrence in non-cirrhotic patients. Conventional chemotherapy is largely unsuccessful due to tumor cell resistance and side effects of "non-selective" cytotoxic drugs. Furthermore, the immunosuppression associated to HCC (mainly subsequent to chronic HBV and HCV infections) negatively impacts on tumor recurrence. For this reason, immuno-based therapies have been proposed to ameliorate the clinical outcome of HCC patients (reviewed in [52]).
Targeted approaches have also been applied, in particular for advance-stage and unresectable HCC. These treatments include oral administration of the multikinase inhibitor sorafenib, or single target agents, such as gefitinib and erlotinib, currently involved in ongoing clinical trials in the US and EU [53]. The therapy with sorafenib, in particular, showed prolonged median overall survival and delayed the median time to progression in patients with HCC, showing limited and manageable adverse effects [54]. However, chronic liver diseases that usually underlie HCC may enhance the hepatotoxicity of these agents; accordingly, the prognosis of late-stage HCC patients is still poor [53].
In this context, the targeted gene therapy for the management of HCC seems to be the most promising approach. In particular, the adenoviral mediated gene therapy is well documented and included in several human clinical trials where the tolerance is high and side effects acceptable in most of the cases (reviewed in [55]). However, the efficiency of transduction and the tumor specificity still remain limiting factors for this approach. In HCC, these problems could be overcome by intratumoral administration of vectors and/or by the use of tumor-specific promoters that may restrict the delivery to hepatocytes (e.g., AFP) [56], especially improving efficacy and minimizing the toxicity of this therapeutical strategy.
Among different approaches of gene therapy (restoration of oncosuppressors, delivery of suicide genes, or inhibition of oncogenes) the delivery of "differentiating" factors could achieve the best results in terms of low toxicity and maintenance of tissue homeostasis, especially compared to killing drugs or agents inducing apoptosis. The most severe consequence may be related to the damage of the stem cell compartment with the decreased number of cells (stem cells or progenitors) responsible for tissue renewal. However, in the liver, the real involvement of resident liver stem/precursor cells in hepatic regeneration after chronic injury is strongly debated since it has been recently formally proved that adult hepatocytes originate from self-duplication of other hepatocytes rather than from stem cell differentiation [57,58]. Altogether, this knowledge suggests that the ectopic expression of differentiation master genes in the liver could be tolerated and the side effects reduced.
LETFs as Molecular Tools for Gene Therapy of HCC
Maintenance of hepatocyte differentiation and control of liver-specific gene expression are attributed in large part to hepatocyte nuclear factors (HNFs) belonging to the LETF family, including HNF1α, HNF4α, HNF6, and FOXA2 [59]. Being reciprocal transcriptional activators, they operate cooperatively in a connected network in the liver, regulating several developmental and metabolic functions in hepatocytes [60,61].
HNF4α
The nuclear receptor HNF4α is a key regulator of hepatocyte differentiation during embryonic development [62,63], influencing the expression of other hepatic transcription factors, and stabilizing co-regulatory networks for the maintenance of a differentiated phenotype [61]. In the adult liver, HNF4α is highly expressed in hepatocytes. HNF4α maintains hepatocyte identity both by inducing epithelial/hepatic differentiation through a direct regulation of epithelial and metabolic target genes [62,64], and by actively inhibiting mesenchymal differentiation program through a direct repression of mesenchymal and EMT master genes [65]. Accordingly, experimental HNF4α deletion in adult mouse livers has been shown to lead to dedifferentiation and proliferation of hepatocytes, hepatomegaly, and expansion of precursor cells (i.e., oval cells) [66,67].
HNF4α is a strong inducer of mesenchymal-to-epithelial transition (MET). Its ectopic expression in fibroblast [62] and F9 cells [68] is sufficient to trigger epithelial gene expression and acquisition of epithelial polarity. Furthermore, HNF4α, together with FOXA1, FOXA2, or FOXA3, was found capable of inducing the direct reprogramming of mouse fibroblasts into hepatocyte-like cells [69].
Importantly, in addition to the transcriptional regulation of mRNAs, HNF4α regulates the expression of miRNAs which, in turn, can act as pleiotropic elements influencing differentiation, EMT, stemness, and hepatocarcinogenesis.
In particular, HNF4α (as well as other LETFs) was found to directly regulate expression of the liver-specific microRNA-122 (miR-122) [70], the most abundant miRNA in hepatocytes, and the first miRNA suggested as a tumor suppressor in the liver. Its expression, indeed, is frequently reduced in HCCs [71] and is associated with low differentiation, migration/invasivity of HCC cells [72,73], and poor prognosis in patients [72]. Mir-122 restoration in HCC cells leads to a reduction of mesenchymal markers [74], cell-cycle arrest or apoptosis [75], and sensitizes cells to antitumor agents [76,77]. Notably, miRNA-122 delivery in HCC murine models impaired tumor occurrence, growth, and progression [73,78].
Recently, the transcriptional regulation of other miRNAs, i.e., members of the miR-200 family and miR-34a, by HNF4α has been described and showed to contribute to the active repression of stem cell genes [24]. Both miR-200 family members and miR-34a were suggested to function as tumor suppressors in HCCs. They appeared markedly down-regulated in HCC [79,80] and their restoration in various cancer stem cells is associated with the loss of stem cell traits, inhibition of EMT, cell differentiation, and decreased motility/invasivity [24,81,82]. However, the role of miR-34a in cancer is currently debated [83] and, in HCC, has been related to the cellular context [84].
It has recently been shown that HNF4α controls the epigenetic state of differentiated hepatocytes through miR-29-mediated down-regulation of DNMT3A,B [85]. Interestingly, low levels of miR-29 and up-regulation of DNMT3A,B correlate with TGFβ-induced EMT, liver fibrosis, and aggressiveness of HCC [86][87][88]. Since epigenetic changes, including DNA methylation sustained by high levels of DNMTs, are involved in both EMT [89] and hepatocarcinogenesis [90], miR-29 could represent a good target for a therapeutic approach aimed at the epigenetic reprogramming of HCC cells.
Several lines of evidence indicate HNF4α as a potential tumor suppressor of HCC. In mature hepatocytes, loss/inactivation of its function resulted in an increased risk for development of HCC. Transient inhibition of HNF4α is sufficient to initiate hepatocellular transformation in non-transformed hepatocytes and to increase invasiveness in transformed HCC cell lines through a microRNA-mediated inflammatory loop circuit [91]. This network can also contribute to the maintenance of HNF4α inactivation during hepatocellular transformation [91]. Several studies have shown a decreased expression of HNF4α in both murine models and human samples of HCC, thus indicating a critical role of this protein in the HCC onset/progression [7,92,93]. As a consequence, the restoration of HNF4α expression/function in HCCs has represented, in the last few years, an important goal for molecular approaches to HCC treatment. The whole described tumor-suppressing functions of HNF4α indicate that this protein represents a good candidate for the extensive reprogramming of tumor cells and, therefore, a promising tool for gene therapy of HCC.
Several data substantiate this expectation. Forced HNF4α expression in dedifferentiated and aggressive HCC is sufficient to reduce tumor cell motility/invasivity by inducing differentiation and EMT inhibition [65,92]. Moreover, HNF4α overexpression attenuates hepatic fibrosis and, in fibrotic livers, can prevent HCC occurrence by blocking the activation of myofibroblasts [93,94]. Furthermore, overexpression of HNF4α in rodent HCC models blocks carcinogenesis and metastasis [93,95].
Overall, the restoration of the HNF4α functions in invasive HCCs has been proven to be an efficient approach for the gene therapy of HCC, at least in experimental models. However, recent data have shown how microenvironment cues could reduce the efficacy of this approach. In particular, the presence of TGFβ in the tumor niche impaired HNF4α activity by inducing the displacement of the ectopic protein from its target gene promoters through the inactivation of GSK-3β activity [96]. This result suggests the need to obtain improved HNF4α proteins as tools for gene therapy, through the design of TGFβ-insensitive mutants.
At the same time, the potential tumor suppressor activity of other members of the LETF family should be explored. Recently, the role in tumor suppression of HNF1α and HNF6, has been described. Similarly to HNF4α, indeed, these proteins are down-regulated in HCC and their overexpression in tumor cell lines was found to suppress EMT and invasion [92].
HNF1α
HNF1α is a homeodomain protein that plays a critical role in hepatocyte differentiation. It contributes to the expression of products central in normal hepatic functions [97] and is required for the maintenance of the differentiated state [98]. HNF1α, moreover, together with HNF4α and HNF6, leads to the generation of functional human-induced hepatocytes (hiHeps) from fibroblasts [99] and its overexpression is necessary for the direct reprogramming of human fibroblasts to hepatocyte-like cells [100].
Extensive evidence suggested that HNF1α acts as a tumor suppressor gene and that its down-regulation contributes to the development of HCC. HNF1α gene was found mutated in 84% of cases of adenomas, including familial forms [101,102], and HNF1α protein levels were found significantly reduced in moderately-and poorly-differentiated tissues from HCCs [6]. Furthermore, HNF1α knock-out mice exhibit tumor-associated characteristics, such as increased proliferation of hepatocytes, leading to a dramatic liver enlargement and liver function defects [103].
Taken together, these findings suggested that restoration of HNF1α functions in HCC could restrain tumor proliferation and progression. Zeng et al. recently demonstrated that the forced re-expression of HNF1α in human hepatoma cell lines induces a re-establishment of hepatic differentiation through the significant induction of liver specific genes and the repression of cell proliferation. Most importantly, intratumoral HNF1α transduction significantly inhibits tumor growth in mice and eradicates HCC nodules after systemic delivery [104].
HNF1α is not only a promising therapeutic tool for a differentiation therapy in HCC treatment but it could be also a potent anti-EMT tool, being a strong transcriptional repressor of EMT master genes as HNF4α [65]. Accordingly, suppression of HNF1α in HCC cell lines triggers expression of mesenchymal and EMT master genes, overexpression of TGFβ, and migration [105].
HNF6/ONECUT1
HNF6 represents another potential molecular tool for tumor suppression in HCC. It is essential for expression of hepatic genes, also controlling the direct expression of HNF4α [106] and genes involved in glucose metabolism [107,108]. Moreover, HNF6 synergistically cooperates with HNF4α and HNF1α for the regulation of hepatocyte differentiation during development and in the adult. HNF6 is also a strong transcriptional activator of miR-122, establishing a positive feedback loop responsible for in vivo hepatocytes differentiation [109] that may contribute to prevent neoplastic transformation.
HNF6, as well as other LETFs, is involved both in the maintenance of the epithelial differentiation and in the active repression of EMT program through the up-regulation of p53 tumor suppressor [110]. Furthermore, it is implicated in the inhibition of HCC progression [92].
HNF6 overexpression reduced the proliferation of liver cancer cell lines [111], inhibited colony formation and cell proliferation/migration in carcinoma cells, and decreased the formation of tumors in nude mice [110]. Conversely, knockdown of HNF6 induced EMT and increased cell migration [110]. Furthermore, HNF6 has been recently shown to interfere, in vitro and in vivo, with HBV infection through the inhibition of viral gene expression and DNA replication [112].
Notably, a potential inhibitory effect of HNF6 on TGFβ signaling has recently been reported. Components of TGFβ signaling pathway were activated in HNF6 knockout mice, at least in part, through the up-regulation of TGFβRII expression [113]. Interestingly, through the inhibition of TGFβ/activin signaling, HNF6 allows differentiation of precursor cells in hepatocytes [114]. These data, in light of what was previously observed for HNF4α, could indicate HNF6 as a more efficient tumor suppressor in the presence of TGFβ in the tumor microenvironment.
Conclusions
The unsuccessful therapeutic approaches for the treatment of HCCs lead to focus the attention on molecular strategies consisting of the intra-tumoral delivery of specific proteins with tumor suppressor properties.
The recent literature data discussed above demonstrate the high potential of anti-cancer therapy based on the restoration of the functions of epithelial/hepatocyte differentiation master regulators belonging to the LETF family, mainly HNF4α. These proteins are able to induce cellular reprogramming, coordinating extensive gene expression either through direct transcriptional regulation or by driving epigenetic changes at regulatory regions of target genes (Figure 1). LETFs, indeed, can not only induce the terminal differentiation of tumor cells (and potentially of cancer stem cells), but can also interfere with the EMT program responsible for tumor progression. These characteristics make LETFs promising tools for molecular therapy of HCC. The challenge now is the optimization of these tools through the creation of engineered molecules that take into account the microenvironmental cues that could influence the effectiveness of this therapeutic approach. Further studies will be necessary to achieve this result.
Author Contributions
AM and MT contributed to the design, writing and editing of the review; FB and AMC contributed to the writing and editing of the review. All authors read and approved the final manuscript.
Conflicts of Interest
The authors declare no conflict of interest. | 2016-06-10T08:59:46.098Z | 2015-10-30T00:00:00.000 | {
"year": 2015,
"sha1": "d0b4250712a39fd269d37e161f51cd4505fd92c6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9721/3/4/325/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0b4250712a39fd269d37e161f51cd4505fd92c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
400519 | pes2o/s2orc | v3-fos-license | Increased risk for hepatitis C associated with solvent use among Canadian Aboriginal injection drug users
Background Solvent abuse is a particularly serious issue affecting Aboriginal people. Here we examine the association between solvent use and socio-demographic variables, drug-related risk factors, and pathogen prevalence in Aboriginal injection drug users (IDU) in Manitoba, Canada. Methods Data originated from a cross-sectional survey of IDU from December 2003 to September 2004. Associations between solvent use and variables of interest were assessed by multiple logistic regression. Results A total of 266 Aboriginal IDU were included in the analysis, of whom 44 self-reported recent solvent use. Hepatitis C prevalence was 81% in solvent users, compared to 55% in those reporting no solvent use. In multivariable models, solvent users were younger and more likely to be infected with hepatitis C (AOR: 3.5; 95% CI: 1.3, 14.7), to have shared needles in the last six months (AOR: 2.6; 95% CI: 1.0, 6.8), and to have injected Talwin and Ritalin (AOR: 10.0; 95% CI: 3.8, 26.3). Interpretation High hepatitis C prevalence, even after controlling for risky injection practices, suggests that solvent users may form closed networks of higher risk even amongst an already high-risk IDU population. Understanding the social-epidemiological context of initiation and maintenance of solvent use is necessary to address the inherent inequalities encountered by this subpopulation of substance users, and may inform prevention strategies for other marginalized populations.
Background
In developed countries, sexually transmitted infections (STI) and bloodborne pathogens (BBP) disproportionately affect marginalized populations. In the United States, Australia, and Canada the combined impact of poverty, lack of access, and historical and systemic oppression have resulted in overrepresentation of indigenous populations in national HIV/AIDS and STI statistics, especially amongst females and youth [1][2][3][4][5][6]. Within Canada, injection drug users (IDU) account for a significant proportion of prevalent HIV and other BBP (such as hepatitis C [HCV]) infections, and are an especially important risk group sustaining endemicity of these pathogens within Aboriginal populations [4,[7][8][9]. However, despite progress in, and substantial efforts towards both understanding, and addressing BBP epidemics in Canadian Aboriginal populations [7], the transmission of some BBP, such as HIV and HCV, appear to be growing unabated [10][11][12]. This paradox has motivated researchers to examine heterogeneity in marginalized subpopulations, with the intention of finding and describing subpopulations that may be at particularly high risk of BBP transmission, as well as the environmental contexts within which they are embedded [13][14][15][16].
To this end, solvent abuse has been shown to be a particularly serious and destructive issue affecting Aboriginal populations in Canada, and elsewhere [17][18][19][20][21][22][23][24][25]. In North America, the lifetime use of solvents has been reported to be as high as 44% in some high-risk groups [26], with some studies finding the prevalence of lifetime use at 17% by the eighth grade [27]. Solvent use is a term broadly applied to the self-administered inhalation of a variety of volatile, psychoactive substances that are found in many common products, including gasoline and adhesive glue [24,28]. Solvent users have elevated rates of negative health outcomes including mental illness [29,30], damage to the central nervous system, heart and lungs [28,31,32], as well as mortality [32,33]. Contributing to its perniciousness, solvents are primarily legal and easily obtainable [18,34]. As well, multiple factors have been identified as being associated with solvent use, including age, sex, ethnicity, education level, co-existing alcohol or other substance use disorders, and child and physical abuse [35][36][37][38][39]. In youth, solvent use has been linked to broader societal issues such as higher school drop-out rates [40], delinquency (including criminal activity) [36,39] and family conflict [39,41]. Salient to this study, an association between chronic solvent use in adolescence and injection drug use among the most marginalized of populations has been demonstrated [20,[42][43][44].
On the treatment side, a particular defining feature of chronic solvent use is that it is typically associated with the most marginalized populations, with, for example higher levels of anti-social behaviour, trauma-exposure and psychiatric morbidities [19,24,45]. In response to the burgeoning need for Aboriginal-specific programs, Canada has over a dozen solvent abuse treatment centres spread across the country [46]. These centres operate under a continuum of interventions, including prevention, early intervention, residential treatment and environmental deterrence. Furthermore, evidence suggests IDU with a solvent use background have a "specific course of addiction" [39], often with much more detrimental outcomes, and a particular intransigency to treatment [39,47]. This "deviant group within a deviant group" has been recognized since the late 1970s [47], but is still poorly understood, relative to other IDU groups.
Despite the link observed between solvent use and IDU, and the disproportionate burden of both solvent abuse and STI/BBP infection in Aboriginal populations, there is little published research on solvent use among Aboriginal IDU. We therefore undertook this study to examine the association between solvent use and socio-demographic and drug-related risk factors in Aboriginal IDUs in Manitoba, Canada. We were also interested in examining the relationship between solvent use and injection of other types of illicit substances, as well as being infected with a BBP (i.e., HIV and HCV).
Study setting and survey instrument
The study setting and survey instrument have been described previously [48][49][50]. This was a cross-sectional survey of IDU in Winnipeg, Manitoba, Canada (pop. 675,000) conducted from December 2003 to September 2004. Recruitment was advertised at local community health centres and meeting places (as identified by key informants) and by word-of-mouth. Eligibility criteria included self-reported use of illicit injection drugs in the 6-month period prior to interview and being aged 15 years or older. Potential participants made telephone contact with the study nurse, who administered all surveys in person. Interviews took place in a private setting of the participant's choosing. An honorarium was provided to all study participants providing written or oral consent. The questionnaire was divided into three sections. The first section consisted of questions based on the respondent's own characteristics, the second elicited information on the respondent's egocentric network (i.e., the people with whom the respondent had regular contact), while the third section asked questions on the respondent's IDU risk network. The first section was of primary interest for this study. The study design was approved by the Health Research Ethics Board of the University of Manitoba and the Winnipeg Regional Health Authority Research Review Committee.
Measures
The outcome measure in this study was a binary variable describing solvent use, derived from a positive answer for "Gasoline/Solvents" to the survey item "In the last 6 months, which of the following drugs have you used without injecting?" The study sample of IDU was restricted to individuals who self-identified as Aboriginal, including those who identified as 'First Nations' or 'Metis'. Variables were grouped into four categories: socio-demographic, injection-related behaviours, other drug use and BBP status. Socio-demographic variables included: age, which was categorized as 15-29, 30-39, and 40 years or more; education, which was coded as 'dropped out (less than grade 12)' or 'grade 12 or higher'; and place of birth, which was coded as 'born inside Manitoba' or 'born elsewhere'. Injection-related behaviours included: locales where drugs were injected (in the last 6 months), a list that included the respondent's own house, a family member's or friend's residence, an empty house, a shelter/hostel, a hotel, a shooting gallery and on the street; sharing needles (ever and in the last 6 months); sharing other injection equipment; injecting someone as a service; injecting someone as a favour; and ease of obtaining needles. The time frame for the last four questions was 6 months.
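To make the variable derivation concrete, the following minimal Python sketch illustrates how the binary solvent-use outcome and the age categories described above could be coded; the column names ("ethnicity", "drugs_noninject", "age") are hypothetical and not taken from the actual survey codebook.

```python
import pandas as pd

# Toy data standing in for the survey responses (hypothetical column names).
df = pd.DataFrame({
    "ethnicity": ["First Nations", "Metis", "Other", "First Nations"],
    "drugs_noninject": [["Gasoline/Solvents", "Marijuana"], [], ["Crack"], ["Gasoline/Solvents"]],
    "age": [24, 41, 33, 29],
})

# Restrict the sample to participants who self-identified as Aboriginal.
aboriginal = df[df["ethnicity"].isin(["First Nations", "Metis"])].copy()

# Binary outcome: reported non-injection solvent use in the last 6 months.
aboriginal["solvent_use"] = aboriginal["drugs_noninject"].apply(
    lambda drugs: int("Gasoline/Solvents" in drugs)
)

# Age categorized as in the text: 15-29, 30-39, 40+.
aboriginal["age_cat"] = pd.cut(
    aboriginal["age"], bins=[14, 29, 39, 200], labels=["15-29", "30-39", "40+"]
)
print(aboriginal[["solvent_use", "age_cat"]])
```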
Participants were asked which drugs they injected most frequently. Finally, in terms of BBP infection, HIV and HCV status were assessed using venous blood samples tested at Cadham Provincial Laboratory (Winnipeg, MB). Specimens were screened for HCV and HIV with AxSYM HCV (Abbott, Mississauga, ON) and AxSYM HIV1/2 gO (Abbott, Mississauga, ON), respectively. Presumptive positives were confirmed for HCV with Chiron HCV 3.0 RIBA (Ortho-Clinical Diagnostics, Markham, ON). Presumptive HIV-positive specimens were confirmed by western blot (BioRad, Montreal, QC).
Statistical methods
Associations between solvent use and variables of interest were assessed using χ2 tests. Variables that were significant at the p < .20 level were included in a multivariable logistic regression analysis. A parsimonious model was desired; therefore, with the exception of sex (which was forced into the model to adjust for its effects), a backwards stepwise regression procedure was used to eliminate variables that were not significant at the p < .05 level. Odds ratios (OR) and their 95% confidence intervals (95% CI) are reported for univariate and multivariable analyses. Multicollinearity of the final model was assessed using VIF and tolerance statistics. Stata version 9 was used to perform all analyses [51].
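The authors performed these analyses in Stata; purely as an illustration, the following Python sketch shows one way to reproduce the same workflow — univariate χ2 screening at p < .20, backwards stepwise elimination at p < .05 with sex forced into the model, and a VIF check. Variable names such as "solvent_use" and "talwin_ritalin" are hypothetical stand-ins for 0/1-coded study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency
from statsmodels.stats.outliers_influence import variance_inflation_factor

def screen_univariate(df, outcome, candidates, alpha=0.20):
    """Chi-square screen: keep variables associated with the outcome at p < alpha."""
    keep = []
    for var in candidates:
        table = pd.crosstab(df[var], df[outcome])
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            keep.append(var)
    return keep

def backwards_stepwise(df, outcome, variables, forced=("sex",), alpha=0.05):
    """Drop the least significant non-forced variable until all remaining are p < alpha."""
    variables = list(dict.fromkeys(list(forced) + list(variables)))
    while True:
        X = sm.add_constant(df[variables].astype(float))
        fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
        pvals = fit.pvalues.drop("const").drop(list(forced), errors="ignore")
        if pvals.empty or pvals.max() < alpha:
            return fit, variables
        variables.remove(pvals.idxmax())

# Hypothetical usage with a data frame `df` of 0/1-coded columns:
# kept = screen_univariate(df, "solvent_use", ["hcv", "shared_needles_6mo", "talwin_ritalin"])
# fit, final_vars = backwards_stepwise(df, "solvent_use", kept)
# odds_ratios, ci = np.exp(fit.params), np.exp(fit.conf_int())
# vifs = [variance_inflation_factor(sm.add_constant(df[final_vars]).values, i + 1)
#         for i in range(len(final_vars))]
```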
Results
A total of 272 IDU identified as Aboriginal. An additional 6 who identified as transgendered were excluded from the analyses due to small numbers, leaving a total sample size of 266. Overall, 44 (16.5%) of the study sample reported solvent use in the last 6 months. Table 1 displays a comparison of characteristics of solvent-using and non-solvent-using IDU. Broadly speaking, the two groups differed significantly, at least at the p < .05 level, by age, injection locations, injection risk behaviours, type of drugs injected and BBP status (Table 1).
Socio-demographic, injection-related and BBP status characteristics
Specifically, solvent-using IDU tended to be younger (p < .001), with an average age of 31.6 years (SD: 7.5), compared to non-solvent-using IDU, who averaged 36.3 years of age (SD: 9.1). Solvent users were also more likely to have reported injecting in a family member's house (Table 1). Solvent users were more likely to be HIV positive than their non-solvent-using counterparts (17.5% versus 8.3%), but this difference was not statistically significant at the p < .05 level (p = .076).
Multivariable analysis
After backwards elimination, the following variables remained in the final logistic regression model (Table 2): HCV status (p = .016), sharing needles in the last 6 months (p = .048), Talwin & Ritalin injection (p < .001) and age (p < .001), adjusted for sex. All variables remained significant if sex was removed from the model.
Discussion
This study examined the association between solvent use in Aboriginal IDU and socio-demographic factors, drug-related risk factors, use of other illicit substances and BBP infection. We found that, after adjusting for other variables including sex, solvent use was significantly associated with Talwin & Ritalin injection, HCV status and age in this population.
Some important limitations of the study should be stated at the outset. First and foremost, ours was a cross-sectional study, and a causal linkage between solvent use and injection drug use cannot be inferred from the data. Although the two likely share determinants, our data are insufficient to establish causality. Aboriginal individuals in Canada face a combination of socially and structurally determined vulnerabilities, including high rates of entrenched poverty, unemployment, homelessness and sexual and physical abuse [2,52,53]. Many of these factors stem from a history of colonization, oppression, systemic racism and discrimination in Canadian society and have resulted in Aboriginal Canadians having unequal access to a variety of resources [2,54]. Thus, the perniciousness of both solvent and injection drug use within Aboriginal populations is more likely a result of these determinants. Second, solvent use was measured broadly. The measure used was not precise enough to discriminate between chronic and casual use. Similarly, different types of solvents were not captured in this study. Third, since a sampling frame could not be constructed for this marginalized and hidden population, the sample was not randomly generated and may not be representative of Aboriginal IDU in other settings, or even in Winnipeg. Fourth, social desirability bias and high non-response rates are always concerns with self-reported data; however, these would likely have served to bias associations toward the null. Finally, the sample size was relatively small and thus may not have had the power to detect significant findings.
Previous studies in Winnipeg have reported Talwin & Ritalin injection as being strongly associated with both Aboriginal ethnicity [50,55] and high HCV prevalence [49]. That HCV infection is three times more likely in the population of solvent-using Aboriginal IDU, after controlling for Talwin & Ritalin injection and risky injection practices, strongly suggests the existence of pockets of higher risk even amongst an already high-risk subpopulation [39,47]. It was also demonstrated that these qualitatively distinct 'higher-risk' groups can be distinguished when both injectable and non-injectable drug use is considered.
The relatively low prevalence of both HIV and HCV among IDU in our geographic setting has motivated researchers to ask what role, if any, public health responses in Winnipeg may have played in this lower prevalence [56]. Both HIV and HCV prevalence in the subset of solvent-using IDU are relatively higher than among other IDU in our sample and, at 18% and 81% respectively, are in closer alignment with the prevalence observed in other jurisdictions [57,58]. This dichotomy in prevalence reinforces the exceptionally high risk faced by solvent-using IDU, and their real or potential ability to be missed by what otherwise may be an effective public health response. This higher-risk group is particularly relevant given the recent attention paid to especially high rates of HIV in Aboriginal populations in central Canada [11,12], and serves to illustrate that BBP epidemics in Canada are not homogeneous.
Solvent use is an issue for which there are no easily identifiable solutions [23,24]. Solvent users are at the bottom of a drug-using hierarchy, both in terms of how they are perceived by other substance users and practitioners, and by the sheer volume of their social and personal challenges [29,39,42,47]. Thus, given the already difficult lifestyle and behavioural issues related to injection drug use [58,59], a combination of solvent use and injection drug use within Aboriginal populations may present considerable and specific challenges for treatment [39,47]. For example, although there is a well-established literature on the effectiveness of harm-reduction efforts such as needle-exchange programs in curtailing the spread of BBPs [60,61], the constituents of an equivalent and appropriate harm reduction strategy for solvent users have not been well articulated in the literature [24], although practical advice may include using solvents in groups and using clean rags or sponges. As well, outreach efforts to these populations may be unduly hampered by the considerable stigma attached to chronic solvent use. Similar to recent Canadian research demonstrating that IDU who also smoked crack cocaine were at higher risk of HIV seroconversion [10], perhaps an especially chaotic lifestyle is contributing to the higher HCV prevalence in our solvent-using subpopulation.
Understanding outlier populations
As Kuller has suggested, understanding epidemics in "outlier" populations may have substantial benefits in unpacking transmission dynamics in more mainstream populations [62], and a deeper examination of this and similar subpopulations is therefore warranted. Thus, we submit that understanding the exogenous factors that contribute to solvent use in IDU may result in a better understanding of marginalized subpopulations in general, particularly with respect to understanding the trajectory of use [63]. For example, it has been recognized that solvent use is typically a group activity [23,24,26]. The natural consequence is the tendency to form closed networks [64], in this case comprised of fellow solvent-using IDU. This may be particularly true in our study population of Aboriginal IDU, since individuals have been shown to form more cohesive structures according to ethnicity [65]. At the same time, the near ubiquity and accessibility of sources of solvents and inhalants is clearly a key contributor to their abuse [66]. Recent programs that seek to address solvent use in adolescent Aboriginal Canadians through improving individual-level coping strategies recognize that without multi-level support structures (e.g. family, community, environment) in place, individual recovery is likely to fail [25]. Other researchers have found that strong peer group sanctions against solvent use, in concert with messages concerning the dangers of solvent use, were protective against lifetime and current use of solvents [23]. Thus, finding ways to identify and engage with solvent users and their peers may have application to other hidden and marginalized populations. Along this line, some authors have suggested that solvent use may be a marker for an inherently more challenging type of substance user [19,45]. Thus, it may be useful to understand to what extent the actual choice of solvent use is a proxy for characteristics that distinguish the most marginalized of subpopulations. Understanding the populations that become chronic abusers of easy-to-obtain substances may carry similar benefits [67]. Here, we have demonstrated the practicality of examining IDU in their use of both injection and non-injection drugs. At the treatment level, this perspective highlights the importance of treating two or more qualitatively distinct addictions concurrently [68,69]. For example, Stenbacka et al. demonstrated that opiate-injecting IDU undergoing methadone maintenance therapy (MMT) were more likely to relapse if they had co-occurring alcohol abuse issues [69]. Secondly, the clustering of solvent and Talwin & Ritalin use suggests that the use of either is driven, to a certain degree, by opportunism. Although our data cannot provide a definitive answer, it would be useful to know under what circumstances IDU resort to inhaling solvents. Assuming inhalation is their 'fallback' method, and philosophically similar to MMT, perhaps a reliable supply of other injectable or non-injectable drugs would deter this subpopulation of IDU from using solvents, and thus prevent some of the more serious neurological and cognitive deficits associated with long-term chronic use [70,71].
Conclusion
In conclusion, although addressing social or peer group norms has long been advocated as part of an effective prevention and treatment strategy for IDU, structural-level interventions may be especially indicated for solvent-using Aboriginal IDU. At a time when rates of HIV and other BBPs are escalating in Canadian Aboriginal populations, studies like this one can help inform targeted strategies, as well as motivate harm reduction research in very marginalized populations. The strong socially-constructed vulnerabilities of Aboriginal populations, the illegality of injection drug use, the obduracy of solvent use to traditional regulation and control, and the extreme marginalization of solvent users may be interacting to create a 'perfect storm' in which IDU who are already infected, and those at high risk for infection, slip through the cracks in public health systems. | 2016-05-12T22:15:10.714Z | 2010-07-19T00:00:00.000 | {
"year": 2010,
"sha1": "3e042536ae4f8c80d00ca34985ca4edd847aa28e",
"oa_license": "CCBY",
"oa_url": "https://harmreductionjournal.biomedcentral.com/track/pdf/10.1186/1477-7517-7-16",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e042536ae4f8c80d00ca34985ca4edd847aa28e",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53532655 | pes2o/s2orc | v3-fos-license | On Khovanov-Seidel quiver algebras and bordered Floer homology
We discuss a relationship between Khovanov- and Heegaard Floer-type homology theories for braids. Explicitly, we define a filtration on the bordered Heegaard-Floer homology bimodule associated to the double-branched cover of a braid and show that its associated graded bimodule is equivalent to a similar bimodule defined by Khovanov and Seidel.
Introduction
The low-dimensional topology community has been energized in recent years by the introduction of a wealth of so-called homology-type invariants. These invariants are defined by associating to a topological object (for example, a link or a 3-manifold) an abstract chain complex whose quasi-isomorphism class-hence, homology-is an invariant of the object.
One obtains such invariants from two apparently unrelated points of view: (1) algebraically, via the higher representation theory of quantum groups, and (2) geometrically/analytically, via symplectic geometry and gauge theory.
Although the invariants themselves share a number of formal properties, finding explicit connections between the two viewpoints has proven challenging.
A striking success in this direction is a result of Ozsváth and Szabó relating the Z/2Z versions of Khovanov homology and Heegaard Floer homology: Theorem 1.1. [42] Let L ⊂ S 3 be a link and let m(L) ⊂ S 3 denote its mirror. There exists a spectral sequence whose E 2 term is Kh(m(L)), the reduced Khovanov homology of the mirror of L, and whose E ∞ term is HF (Σ(L)), the Heegaard-Floer homology of the double-branched cover of L.
This result has generated applications in a number of directions (see, e.g., [43], [53], [8]). It also served as inspiration for Kronheimer and Mrowka's construction of an analogous spectral sequence from Khovanov homology to a version of instanton knot homology, yielding a proof that Khovanov homology detects the unknot [29].
The aim of the present paper is to move toward a more "atomic" understanding of the Ozsváth-Szabó spectral sequence and its sutured generalizations ( [44,14,13,15]). In particular, viewing a link in S 3 as the closure of a braid, we can ask whether there are appropriate Khovanov-type (algebraic) and Heegaard-Floertype (geometric/analytic) invariants associated to braids such that the Ozsváth-Szabó spectral sequence emerges as an algebraic consequence of a relationship between these invariants.
Such a description would not only be of theoretical interest. Ozsváth-Szabó's original description of the above spectral sequence involves holomorphic polygon counts in Heegaard multi-diagrams. Since these counts are tricky to carry out in practice, finding ways to perform them combinatorially should prove valuable, especially in light of subsequent work of Baldwin [7] (see also L. Roberts [45]) proving that the terms of the Ozsváth-Szabó spectral sequence are themselves link invariants.
We should at this point remark that recent work of Lipshitz-Ozsváth-Thurston, in [35] and its sequel, does precisely this. In addition, Szabó [52] has constructed a combinatorial filtration on the Khovanov cube of resolutions associated to a link diagram that he conjectures yields the original Ozsváth-Szabó spectral sequence.
In the present paper, we address a slightly different question from a substantially different direction. First, we focus not on the original Ozsváth-Szabó spectral sequence but rather on (a direct summand of) one of its sutured generalizations [44,13]. Second, we take as our starting point a paper of Khovanov-Seidel [24], which explores a concrete instance of Kontsevich's homological mirror symmetry conjecture [27]. The constructions found there, when combined with work of the first author [4], lead naturally to a new view on the filtered complexes appearing in [44,13].
Explicitly, given a braid σ ⊂ D 2 × I, we consider the closure of the braid, not in the three-ball but in the solid torus (viewed as a product sutured annulus, A × I). Associated to the resulting annular link are Khovanov-type and Heegaard-Floer-type invariants connected by a sutured spectral sequence [2,44,13] that splits along an extra grading measuring "wrapping" around the S 1 factor. In [5], building on work in [33], we obtain a similar spectral sequence in the "next-to-top" graded piece as the Hochschild homology of a filtered A ∞ bimodule associated to the original braid, σ.
The purpose of the present paper is to give an explicit combinatorial construction of this filtered A ∞ bimodule. Informally, the resulting spectral sequence interpolates between the "open" Khovanov-and Heegaard-Floer-type invariants of a braid σ ⊂ D 2 × I just as the sutured spectral sequence interpolates between the analogous "closed" invariants of its closure,σ ⊂ A × I.
More precisely: (1) On the algebraic side, we show how to use ideas of Khovanov-Seidel in [24] to construct an A ∞ bimodule, M Kh σ , via Yoneda imbedding of a distinguished collection of objects in the derived category of a quiver algebra.
(2) On the geometric/analytic side, we use the bordered Floer homology package of Lipshitz-Ozsváth-Thurston in [32,33] to construct an A ∞ bimodule, M HF σ , the 1-strand CFDA bimodule associated to the mapping class σ̃ obtained as the double-branched cover of σ ⊂ D 2 × I.
Letting 1 denote the identity braid of the same index as σ, we prove: Theorem 6.1. There exists a filtration on M HF σ whose associated graded bimodule is quasi-isomorphic, as an ungraded A ∞ bimodule over gr(M HF 1 ) = M Kh 1 , to M Kh σ .
In particular, for each braid there exists a spectral sequence connecting the Khovanov-Seidel (algebraic) bimodule to the Lipshitz-Ozsváth-Thurston (geometric/analytic) one. Moreover, these "open" spectral sequences can be defined without reference to holomorphic curves. In fact, our construction is based on a remarkably simple toy model (Lemma 5.3): a filtered complex interpolating between the cohomology of S 1 and the cohomology of S 0 (both over Z/2Z) coming from a Z/2Z-equivariant cochain complex for S 1 . This toy model was, in turn, inspired by work of Seidel and Smith [50].
We pause here to emphasize some key points. First, the algebraic objects appearing in [24] do themselves admit a geometric interpretation in terms of the Fukaya category of a certain Lefschetz fibration (cf. Section 3.5). Those readers familiar with [48] may therefore prefer to perform the Section 3 calculations geometrically. We have opted instead to work entirely in the algebraic setting, using symplectic geometry only as motivation. Although this has surely increased the paper's length, we hope it has simultaneously increased its accessibility to nongeometers.
This accessibility is essential, as the algebraic version of the Khovanov-Seidel construction has a beautiful representation-theoretic interpretation. Explicitly, the Khovanov-Seidel algebra is a special case (for k = 1) of a family of algebras A k,n−k , introduced by Chen-Khovanov [12] and independently by Stroppel [51], giving rise to a categorification of the U q (sl 2 ) Reshetikhin-Turaev invariant for tangles. These algebras can also be identified with endomorphism algebras of projective generators of certain blocks O k,n−k of category O. We conjecture that Theorem 6.1 admits a generalization which, for every n-strand braid σ, provides a relationship between the k-strand part of the Lipshitz-Ozsváth-Thurston bimodule associated to σ and a Khovanov-type bimodule defined over the Ext-algebra of the direct sum of all standard A k,n−k -modules.
We end by remarking that the construction of our filtration required a choice of a common "basis" of generators for the relevant Fukaya categories. One natural choice is made in [24] (corresponding in the geometric setting to Lagrangians where all but one is compact and in the algebraic setting to Luzstig's canonical basis for a tensor product representation), while another equally natural choice is made in [32], as reinterpreted in [4] (corresponding to non-compact Lagrangians and the standard basis for a tensor product representation). We work with the latter, noncompact basis because both (k = 1) algebras in the noncompact case are formal (see Lemma 3.12 and [4,Prop. 3.6]) while the bordered Floer algebra corresponding to the compact basis is not [48,Chp. 20], [31].
The paper is organized as follows: In Section 2, we establish notation and collect a number of useful definitions and elementary algebraic results.
In Section 3, we describe the topological input needed for the algebraic constructions in the remainder of the paper. After reviewing the key points in [24], we proceed to the construction and description of • an algebra, B Kh , associated to a marked disk D m equipped with a specific basis of curves and • a module, M Kh σ , associated to each braid σ, decomposed as a product of elementary Artin generators. We conclude the section with a brief discussion of the Fukaya-theoretic interpretation of B Kh and M Kh σ . In Section 4, we turn to the construction and description of the analogous bordered Floer algebra B HF and bimodules M HF σ , using the same topological input.
In Section 5, we describe a natural filtration on B HF whose associated graded algebra is isomorphic to B Kh . Our construction is based on a simple "toy model" (Lemma 5.3).
In Section 6, we describe a filtration on M HF σ whose associated graded homology bimodule is quasi-isomorphic to M Kh σ . We proceed by choosing a decomposition σ = σ ± k1 · · · σ ± kn of σ as a product of elementary Artin generators and explicitly constructing a filtration on M HF σ from this decomposition. In Section 7, we describe an example highlighting the nontriviality of the filtration on M HF σ . 1.1. Acknowledgements. We are grateful to Tony Licata, Robert Lipshitz, Peter Ozsváth, Catharina Stroppel, and Dylan Thurston for a great number of interesting conversations, and to the MSRI semester-long program on Homology Theories of Knots and Links for making these conversations possible. We would also like to thank Joshua Sussan for bringing to our attention that some of the algebraic results of Section 3 (in particular, Lemma 3.12) were independently obtained by Angela Klamt and Catharina Stroppel in [25] and [26]. Many thanks are also due to the excellent referee and editor, whose insightful suggestions greatly improved the manuscript. Finally, we are indebted to John Baldwin, who helped us find the example described in Section 7.
Algebraic preliminaries
In this section, we establish some basic facts about filtered A ∞ algebras and modules. We assume throughout that we are working over the field F = Z/2Z. In addition, many of the spaces we discuss will be graded either by Z, in which case we say they are graded, or by Z × Z, in which case we say they are bigraded. The (co)homological grading always appears first.
If V is a bigraded vector space and k 1 , k 2 ∈ Z, then V [k 1 ]{k 2 } will denote the vector space whose first (homological) grading has been shifted down by k 1 and whose second (internal) grading has been shifted up by k 2 ; explicitly, the piece of V [k 1 ]{k 2 } in bidegree (i, j) is the piece of V in bidegree (i + k 1 , j − k 2 ). We omit the standard definitions of A ∞ algebras, modules, morphisms and homotopies, instead referring to Keller's expository papers: [20], [21]. Other excellent references are Seidel's book [48] (though the reader should be warned that Seidel's ordering conventions (cf. Eqn 1.1) for A ∞ morphisms differ from ours), the thesis of Lefèvre-Hasegawa [30], and Chapter 2 of [32]. All A ∞ modules we consider will be over homologically unital algebras (c-unital, in the terminology of [48]), and morphisms between homologically unital algebras must be homologically unital.
Remark 2.2. The algebraically defined modules we study here are, in fact, strictly unital (cf. Remark 2.5), but the geometrically defined ones need not be.
Let n ∈ Z + and n 1 , n 2 ∈ Z ≥0 . We shall use the notation m n to refer to the nth structure map m n : A ⊗n → A of an A ∞ algebra A, and the notation m (n1|1|n2) to refer to the (n 1 |1|n 2 ) structure map m (n1|1|n2) : A ⊗n1 ⊗ M ⊗ B ⊗n2 → M of a bimodule M admitting a left (resp., right) A ∞ action by the A ∞ algebra A (resp., B). (The difference in shift conventions for homological versus internal gradings is unfortunate, but standard in the literature. In particular, they coincide with those in [24]. The reader should be warned, however, that [32] uses a different convention, since their differential maps decrease rather than increase homological grading.) If A is ungraded but otherwise satisfies all of the conditions of an A ∞ algebra, we call A an ungraded A ∞ algebra.
A graded (resp., ungraded) A ∞ algebra satisfying m n = 0 for all n > 2 is a differential graded algebra (dga) (resp., a differential algebra) with differential ∂ := m 1 and multiplication m 2 . The terminology is completely analogous for graded and ungraded A ∞ and differential modules.
If M and N are A ∞ bimodules, we will refer to the map f (n1|1|n2) associated to an A ∞ morphism f as the "(n 1 |1|n 2 ) term of f ." In addition, we will use the terminology "(n 1 |1|n 2 ) A ∞ relation" to refer to the A ∞ relation corresponding to n 1 left inputs and n 2 right inputs; a sketch of the (1|1|0) A ∞ relation for an A ∞ morphism f : M → N appears below. If f (n1|1|n2) = 0 whenever n 1 + n 2 > 0, then we say that f = f (0|1|0) is a strict morphism of A ∞ modules; similarly, a strict morphism f : A → B of differential (graded) algebras is a chain map intertwining the multiplication, m 2 .
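For concreteness, over F = Z/2Z the (1|1|0) relation for an A ∞ morphism f : M → N of left A ∞ A-modules takes the following standard form (our transcription, in our ordering conventions, rather than a verbatim display), with one algebra input a ∈ A and one module element x ∈ M:

```latex
m^{N}_{(1|1|0)}\bigl(a \otimes f_{(0|1|0)}(x)\bigr)
+ m^{N}_{(0|1|0)}\bigl(f_{(1|1|0)}(a \otimes x)\bigr)
+ f_{(0|1|0)}\bigl(m^{M}_{(1|1|0)}(a \otimes x)\bigr)
+ f_{(1|1|0)}\bigl(a \otimes m^{M}_{(0|1|0)}(x)\bigr)
+ f_{(1|1|0)}\bigl(m^{A}_{1}(a) \otimes x\bigr) \;=\; 0 .
```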
An A ∞ morphism f is said to be a quasi-isomorphism if f 1 induces an isomorphism on homology.
Homological perturbation theory allows one to transfer A ∞ structures along certain morphisms. Although the situation of particular interest to us is the transfer of an A ∞ structure along a chain homotopy equivalence p : A → H * (A) as in [19,40,28], such a transfer can be performed in much greater generality. See [39] (and the related discussion in [48,Sec. (1i)]). A nice account is also given in [9, Thm. 2.1]. The tree formulas for this transferred structure are summarized in the following proposition. Proposition 2.3. Let p : A → H * (A), ι : H * (A) → A, and h : A → A be a contraction of a chain complex, A, onto its homology, H * (A). In other words, p and ι are chain maps and h is a chain homotopy satisfying pι = Id and ιp = Id + ∂h + h∂. Suppose A is further endowed with a (not necessarily unital) A ∞ structure extending the differential structure, i.e., with multiplication maps m n for n ≥ 2. Then H * (A) inherits an A ∞ structure whose nth multiplication map is given by the sum of the maps m T n , where the sum ranges over all planar rooted trees T with n leaves and m T n is defined by applying the T -shaped diagram with (1) leaves labeled with ι, (2) interior edges labeled with h, (3) vertices labeled with the multiplication maps m i in the algebra A, and (4) root labeled with p to an element of (H * (A)) ⊗n .
See Figure 1 for an enumeration of all such rooted trees T specifying the multiplication m n when n = 4. The resulting "transferred" A ∞ structure on H * (A) is unique (independent of the choice of p, ι, h) up to non-unique A ∞ isomorphism.
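As an illustration of the tree formula (the n = 3 case written over F = Z/2Z; a standard instance rather than a quotation of the paper's displayed equations), the transferred triple product is

```latex
m^{H_*(A)}_{3}(x_1, x_2, x_3)
 \;=\; p\,m_3(\iota x_1, \iota x_2, \iota x_3)
 \;+\; p\,m_2\bigl(h\,m_2(\iota x_1, \iota x_2),\, \iota x_3\bigr)
 \;+\; p\,m_2\bigl(\iota x_1,\, h\,m_2(\iota x_2, \iota x_3)\bigr),
```

the three summands corresponding to the corolla with three leaves and the two planar trivalent trees with three leaves.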
The analogous transferred A ∞ structure on the homology H * (M) of an A ∞ module M over A is constructed exactly as described in Proposition 2.3, where the leaves and root of each tree have been labeled with H * (M) rather than H * (A), where appropriate.
Remark 2.5. The condition ιp = Id + ∂h + h∂ in the statement of Proposition 2.3 is all that is needed to transfer the A ∞ structure from A to H * (A), while the extra condition pι = Id ensures that the two structures are quasi-isomorphic. Moreover, although Proposition 2.3 as stated is a result about non-unital A ∞ algebras (and non-unital modules over them), in the cases of interest in the present work (specifically, Lemmas 3.12 and 3.16), Proposition 2.3 yields quasi-isomorphisms of strictly unital algebras (resp., modules).
Note also that if H * (A) is finite-dimensional, the condition pι = Id is a consequence of the condition ιp = Id + ∂h + h∂, hence may be omitted. Definition 2.6. An A ∞ structure on H * (A) constructed as in Proposition 2.3 is called a minimal model of A. An A ∞ algebra is said to be formal if a minimal model can be chosen so that m n = 0 for all n > 2.
Henceforth, whenever we refer to the minimal model, H * (A), for A an A ∞ algebra, we shall always assume it has been endowed with the structure provided by Proposition 2.3 for suitable maps ι, p, h. Definition 2.7. Let A be a homologically unital A ∞ -algebra. The derived category D ∞ (A) is the category with objects homologically unital A ∞ -modules (left, right, or bi-, depending on the context) and morphisms A ∞ -homotopy classes of A ∞ -morphisms. Remark 2.8. Since every A ∞ quasi-isomorphism has an inverse up to homotopy (see [10,Lemma 10.12.2.2]), passing to the derived category has the effect of making A ∞ quasi-isomorphisms invertible. Definition 2.9. A (graded or ungraded) filtered A ∞ algebra A is a (graded or ungraded) A ∞ algebra equipped with a sequence of subspaces · · · ⊆ F i−1 (A) ⊆ F i (A) ⊆ F i+1 (A) ⊆ · · · ⊆ A, for i ∈ Z, that are compatible with the A ∞ structure in the following sense: m n (F i1 (A) ⊗ · · · ⊗ F in (A)) ⊆ F i1+···+in (A) for all n and all i 1 , . . . , i n ∈ Z. If m n = 0 for all n > 2, A is a (graded or ungraded) filtered differential algebra. (Graded or ungraded) filtered A ∞ modules and filtered differential modules are defined analogously.
Note that the compatibility of the filtration with the multiplicative structure ensures that if A is a filtered A ∞ algebra, the associated graded algebra ⊕ i F i /F i−1 is a well-defined (graded or ungraded) A ∞ algebra, and if M is a filtered A ∞ module over a filtered A ∞ algebra A, then the associated graded module ⊕ i F i /F i−1 is a well-defined A ∞ module over the associated graded algebra of A. Definition 2.10. A filtered A ∞ algebra A (resp., module M) is said to be bounded if there exist n < N ∈ Z such that 0 = F n (A) and A = F N (A) (resp., 0 = F n (M) and M = F N (M)).
Notation 2.11. If M is a filtered A ∞ module and k ∈ Z, M{k} will denote the filtered A ∞ module whose filtration has been shifted by k. A filtration on an A ∞ algebra (resp., module) induces a spectral sequence in the standard way, and if the filtered complex is bounded this spectral sequence converges in a finite number of steps. Furthermore, each page of the corresponding spectral sequence has the structure of an A ∞ algebra (resp., module), by Proposition 2.3. We will call the homology of the associated graded complex, ⊕ i∈Z F i /F i−1 , the associated graded homology algebra (resp., the associated graded homology module) and the homology of the total complex (i.e., the E ∞ page of this spectral sequence) the total homology algebra (resp., the total homology module).
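As a toy illustration of the difference between the associated graded homology and the total homology (an example of ours, not drawn from the text), consider the two-dimensional filtered complex

```latex
C = \mathbb{F}\langle a, b \rangle, \qquad \partial a = b,\ \ \partial b = 0, \qquad
0 = F_{-1} \subseteq F_0 = \mathbb{F}\langle b \rangle \subseteq F_1 = C .
```

The differential respects the filtration but strictly decreases filtration level, so the induced differential on the associated graded complex vanishes: the associated graded homology has rank 2, while the total homology is 0, the two generators cancelling on a later page of the spectral sequence.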
If M is a filtered left A ∞ A-module and N is a filtered right A ∞ B-module, then M ⊗ N inherits a filtration (and, hence, the structure of a filtered A ∞ A-B bimodule in the sense of Definition 2.9) via F i (M ⊗ N) = Σ j+k≤i F j (M) ⊗ F k (N). Similarly, the A ∞ tensor product of filtered A ∞ bimodules naturally inherits the structure of a filtered A ∞ bimodule: Lemma 2.12. Let M, N be two filtered A ∞ bimodules over a filtered A ∞ algebra A. Then the A ∞ tensor product M ⊗ A N, with underlying vector space ⊕ n≥0 M ⊗ A[1] ⊗n ⊗ N, inherits the structure of a filtered A ∞ bimodule, where F I (M ⊗ A N) is spanned by those elements x ⊗ a 1 ⊗ . . . ⊗ a n ⊗ y whose tensor factors have filtration levels summing to at most I. Proof. The maps contributing to the differential on the complex all respect the filtration in the sense of Definition 2.9. The same is true for the higher multiplications on the complex, for the same reason.
Definition 2.14. Let A be a filtered A ∞ algebra, and f : M → N a filtered A ∞ morphism between filtered A-modules M and N. Let m M (n1|1|n2) (resp., m N (n1|1|n2) ) denote the A ∞ multiplication maps for M (resp., for N).
Then the mapping cone of f , denoted M C(f ), is the filtered A ∞ A-module with underlying F-vector space M[1] ⊕ N, with A ∞ multiplication maps m MC(f) (n1|1|n2) (ā ⊗ (x, y) ⊗ b̄) = ( m M (n1|1|n2) (ā ⊗ x ⊗ b̄), m N (n1|1|n2) (ā ⊗ y ⊗ b̄) + f (n1|1|n2) (ā ⊗ x ⊗ b̄) ), where ā and b̄ denote the tuples of left and right algebra inputs, and with the filtration induced by the filtrations of M and N. The following lemma will be useful in the proof of Theorem 6.1. Lemma 2.15. Let M and N be filtered A ∞ bimodules over a filtered A ∞ algebra A. Then gr(M ⊗ A N) and gr(M) ⊗ gr(A) gr(N) are isomorphic as chain complexes. Proof. The chain map sending [x] ⊗ [a 1 ] ⊗ . . . ⊗ [a n ] ⊗ [y] ∈ gr(M) ⊗ gr(A) gr(N) to the class of x ⊗ a 1 ⊗ . . . ⊗ a n ⊗ y in F I /F I−1 (where in the above I := i + j 1 + . . . + j n + k is the sum of the filtration levels of the tensor factors) is an isomorphism of chain complexes. This map is well-defined, since any other representative x ⊗ a 1 ⊗ . . . ⊗ a n ⊗ y of the class [x] ⊗ [a 1 ] ⊗ . . . ⊗ [a n ] ⊗ [y] will differ from the chosen one by an element in F I−1 , by the definition of the filtration on M ⊗ A N. Similarly, we send an equivalence class [x ⊗ a 1 ⊗ . . . ⊗ a n ⊗ y] ∈ gr(M ⊗ A N) to the uniquely-specified equivalence class [x] ⊗ [a 1 ] ⊗ . . . ⊗ [a n ] ⊗ [y] ∈ gr(M) ⊗ gr(A) gr(N).
These maps are well-defined, and the differentials on the two sides can be easily seen to agree. Furthermore, the differentials on gr(M ⊗ A N) and gr(M) ⊗ gr(A) gr(N) agree, by the same argument above applied to the image of the differential of a representative x ⊗ a 1 ⊗ . . . ⊗ a n ⊗ y ∈ M ⊗ A N.
2.1. Formality of dg algebras and modules. The following technical lemmas provide sufficient (but not necessary) conditions for formality of an A ∞ module. Lemma 2.16. Let A be a differential (graded) algebra (resp., let M be a differential (graded) module over A), and let ι, p, h be maps satisfying the conditions in Proposition 2.3. If, in addition, then A is formal (resp., M is formal).
Proof. In the interest of brevity, we give the argument for the case of A a differential (graded) algebra, leaving the completely analogous proof in the case of M a differential (graded) module to the reader.
Each tree T contributing to the definition of for n > 2 yields the 0 map, since each such tree T involves a product of terms in A, at least one of which is either: for n > 2 (if T is not trivalent). In both cases, such a term is 0 in A by assumption, hence the corresponding map is 0, implying formality of A.
To see that ι : A → H * (A) is a strict quasi-isomorphism, we refer to [9, Thm. 2.1], which tells us that ι n can be defined recursively as Assumptions (1) and (2), combined with the assumption that m A r = 0 for r > 2, now allow us to conclude inductively that ι n = 0 for n ≥ 2, as desired. Proof. We give the proof in the case that M is a differential (graded) bimodule over A. If Assumption (2) holds only under left (resp., right) multiplication, then p M will be a strict quasi-isomorphism of left (resp., right) A-modules.
Since A is an algebra, m A n = 0 unless n = 2, and A is trivially A ∞ isomorphic to its homology. Choosing ι A : H * (A) → A and p A : A → H * (A) to be the identity morphism, and h A : A → A to be the zero morphism, we now claim that any tree T contributing to the definition of is zero if n 1 + n 2 + 1 > 2. This follows because: • If T is trivalent then it corresponds to a summand of the form p M •h M (m), since Im(h M ) is an A-bimodule. Such a term is zero by Assumption (1). • If T is not trivalent then it involves a product with at least one term of the form: . . ⊗ ι)) for n 1 + n 2 + 1 > 2 (resp., n > 2), which is zero since M is a dg module (resp., since A is an algebra). To see that p M is a strict quasi-isomorphism, we again appeal to [9, Thm. 2.1], which gives recursive definitions for (p M ) (n1|1|n2) in terms of p, m, and an auxiliary morphism h [n1|1|n2] , defined recursively in terms of p, ι, h.
Khovanov-Seidel Hom algebras and bimodules
In this section, we construct dg bimodules following Khovanov-Seidel in [24]. We begin by describing the topological data needed for the construction of both the Khovanov-Seidel bimodules and their bordered Floer analogues (described in Section 4).
We emphasize that although we have chosen to describe the Khovanov-Seidel objects from a purely algebraic viewpoint, they also admit a beautiful Fukayatheoretic description (cf. Section 3.5). Readers familiar with [24,Sec. 6] and [48,Chp. 20] will likely benefit from keeping this geometric picture in mind.
3.1. Topological data: Basis of curves. Let D m denote the unit disk in the complex plane, equipped with a set, ∆, of m + 1 marked points in its interior (we identify D m with the unit disk in C). By convention, the distinguished point, labeled by a *, at −1 ∈ ∂D m , is the left endpoint for all ∂-admissible curves in D m .
A ∂-admissible curve is a particular type of admissible curve in the sense of [24,Sec. 3b]. Two ∂-admissible curves c 1 and c 2 are said to be isotopic if there is a homotopy between c 1 and c 2 through ∂-admissible curves. A collection of ∂-admissible curves is said to be in normal form if any two curves c 0 and c 1 in the collection intersect transversely. If we, furthermore, specify a lift of each curve, c j ∈ B, to a bigraded curve, c j , we say that we have a basis, B = { c 0 , . . . , c m }. Unless otherwise specified, from this point forward whenever we write that B is a basis, we shall always mean that B is a basis of ∂-admissible bigraded curves in normal form in D m . Two bases B = { c 0 , . . . , c m } and B' = { c' 0 , . . . , c' m } are said to be equivalent if there exists an isotopy c i → c' i for each i = 0, . . . , m through ∂-admissible bigraded curves in normal form.
As in [24], we let G = Diff(D m , ∂D m ; ∆) denote the group of diffeomorphisms f of D m satisfying f | ∂Dm = Id and f (∆) = ∆ and note that there is a canonical identification of π 0 (G) with B m+1 , the Artin braid group on m + 1 strands. Under this correspondence, (isotopy classes of) ∂-admissible curves are sent to (isotopy classes of) ∂-admissible curves. Moreover, an (equivalence class of) basis B is sent to an (equivalence class of) basis σ( B), after suitably reordering the curves in σ( B).
3.2. The ring A m and a braid group action on D b (A m ). In [24], Khovanov-Seidel associate to a braid, σ ∈ B m+1 , a bimodule over a quiver algebra, A m (defined below). In this subsection, we explain how their construction yields a family of algebras and bimodules, one for each choice of basis. Our end goal is the construction of a particular algebra, B Kh , and a bimodule, M Kh σ over B Kh , from the data of a particular such basis, Q.
We begin by reviewing the original construction of Khovanov-Seidel in [24]. Let Γ m be the oriented graph (quiver) whose vertices are labeled 0, . . . , m and whose edges are shown in Figure 3. Recall that, given any oriented graph Γ, one defines its path ring as the vector space over F freely generated by the set of all finite-length paths in Γ, where multiplication is given by concatenation, and the product of two non-composable paths is set to 0. The ring A m is then defined as a quotient of the path ring of Γ m by a collection of relations, one family for each 0 < i < m (written out just below). In the above, following [24], we have labeled each path in Γ m by the complete ordered tuple of vertices it traverses. So, for instance, (i − 1|i|i + 1) denotes the path that starts at vertex i − 1, moves right to i, then right again to i + 1. The path ring of Γ m is further endowed with a grading by setting deg(i) = deg(i|i + 1) = 0 and deg(i|i − 1) = 1 for all i. This grading descends to A m . Following [24], we denote A m (i) (resp., (i)A m ) by P i (resp., i P ). Note that P i (resp., i P ) is the set of all paths ending at i (resp., beginning at i).
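Explicitly, the relations in question are our transcription of the defining relations of the Khovanov-Seidel algebra from [24]; for each 0 < i < m one imposes

```latex
(i \mid i-1 \mid i) \;=\; (i \mid i+1 \mid i), \qquad
(i-1 \mid i \mid i+1) \;=\; 0, \qquad
(i+1 \mid i \mid i-1) \;=\; 0 .
```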
To streamline notation, we henceforth assume that we have fixed m ≥ 0 ∈ Z, and let A denote the algebra A m .
Khovanov-Seidel go on to associate to each braid σ ∈ B m+1 an element of D b (A), the bounded derived category of A-bimodules, by associating to each elementary Artin braid generator σ ±1 i (pictured in Figure 4) a dg bimodule M σ ± i and to each braid, σ := σ i1 ± · · · σ i k ± , decomposed as a product of elementary braid words, the dg bimodule obtained as the tensor product over A of the bimodules associated to its elementary factors. They then verify that any two decompositions of σ as a product of elementary Artin braid generators give rise to quasi-isomorphic complexes, and hence M σ gives rise to a well-defined element in D b (A). Definition 3.6. Let (C 1 , ∂ 1 ), (C 2 , ∂ 2 ) be two bounded dg left modules over an algebra A. Then the Hom complex of the pair (C 1 , C 2 ), denoted Hom A (C 1 , C 2 ), is the bounded complex whose generators are left module morphisms, F : C 1 → C 2 , and whose differential, D, is given by D(F ) = ∂ 2 ◦ F + F ◦ ∂ 1 . For a basis B = { c 0 , . . . , c m }, the direct sum ⊕ m i,j=0 Hom A (L( c i ), L( c j )) is a dga, with multiplication given by composition of A-module morphisms. We will refer to ⊕ m i,j=0 Hom A (L( c i ), L( c j )) as the Hom algebra associated to B. We focus in the present paper on the Hom algebra associated to the basis Q = { q 0 , . . . , q m } given by (a particular lift of) the collection of curves pictured in Figure 5. Applying the construction of [24, Sec. 4a], we associate to q j a dg module, Q j , built from the projective modules P 0 , . . . , P j (a sketch appears at the end of this paragraph), where the differential map "·(i−1|i)" denotes "right multiplication by the element (i−1|i)." By fixing a lift of the tangent vector to the curve q 0 at a point near 0 ∈ ∆ and declaring this lift to correspond to bigrading (0, 0), we obtain a "canonical" bigrading on Q j satisfying the property that the bigrading of the idempotent (i) ∈ P i is (i, 0). Notation 3.8. We shall denote by B the Hom algebra associated to Q, namely B := ⊕ m i,j=0 Hom A (Q i , Q j ), and by B Kh its homology, H * (B), considered as an A ∞ algebra via the construction in Proposition 2.3.
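Concretely, and consistently with the computations in the proof of Lemma 3.16 below, Q j may be taken to be the following complex built from the projectives P 0 , . . . , P j ; the precise homological and internal grading shifts (suppressed here) are our assumption rather than part of the quoted construction:

```latex
Q_j \;=\; \Bigl(\, P_0 \xrightarrow{\;\cdot(0|1)\;} P_1 \xrightarrow{\;\cdot(1|2)\;} \cdots \xrightarrow{\;\cdot(j-1|j)\;} P_j \,\Bigr),
```

where each arrow is right multiplication by the indicated length-one path, and d 2 = 0 follows from the relation (i − 1|i|i + 1) = 0 in A.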
We will eventually be interested in D ∞ (B Kh )-in particular, a braid group action on this category-so we now devote some time to describing the structure of B and B Kh .
Suppose further that P i0 {s 0 } is in (co)homological grading 0. Then we will use the notation I R to denote the following bounded complex of elementary projective right A-modules: in R I is given by right multiplication by a path γ ∈ A, then the corresponding map ij P ← ij+1 P in I R is given by left multiplication by γ. Proof. Each element φ ∈ Hom A (R I , S J ) can be decomposed as a sum of left Amodule maps φ k, : P i k {s k } → P j {s }, each of which is uniquely determined by the image, φ k, (i k ), of the idempotent, (i k ). We therefore obtain an isomorphism To see that the Hom complex differential D(φ) := φd I + d J φ on the left matches the tensor product differential on the right, we simply note that if φ = k, φ k, ∈ Hom A (R I , S J ), then for each pair, (k, ), φ k, d I is obtained by pre-(i.e., left-) (resp., d J φ k, is obtained by post-(i.e., right-)) multiplying φ k, by a path γ k (resp., γ ). This is precisely the induced differential on the tensor product complex I R ⊗ A S J . 5 In the language of [11], Q j is a projective resolution of the standard module associated to the length m + 1 weight λ = (∨ . . . ∨ ∧ ∨ . . . ∨), where the lone ∧ is in the (j ∈ {0, . . . , m})th position.
Lemma 3.11. Let R I , S J be two bigraded bounded complexes of projective modules obtained from admissible bigraded curves in normal form as explained in [24,Sec. 4]. Then the differential on Hom A (R I , S J ) has degree (1, 0).
Proof. By definition, the differential on each of R I , S J has degree (1, 0), implying that the differential on I R and, hence, the differential on I R ⊗ A S J ≅ Hom A (R I , S J ), has degree (1, 0) as well.
The following lemma was also obtained independently by Klamt and Stroppel in [25] and [26]. Lemma 3.12. The A ∞ algebra B Kh := H * (B) is formal and admits the following explicit description: as an F-vector space it has a basis consisting of elements i 1 j ∈ i B Kh j (for i ≥ j) and i x j ∈ i B Kh j (for i > j); the two components of the bigrading of each basis element sum to zero; and the multiplication is given by i 1 j · j 1 k = i 1 k , i 1 j · j x k = i x k , i x j · j 1 k = i x k , and i x j · j x k = 0, with all products of basis elements whose inner indices do not match equal to zero. Proof. We know from [24,Prop. 4.9] that as an F-vector space, i B Kh j is free of rank 0 when i < j, 1 when i = j, and 2 when i > j.
Indeed, one sees by direct calculation that when i < j the chain complex splits as the direct sum of two acyclic subcomplexes. When i = j, the chain complex splits in a similar fashion, but the first of the two complexes has homology generated by (0) + . . . + (j) and the second is acyclic. When i > j, the chain complex again splits, but now both subcomplexes have non-trivial homology, the first generated by (0) + . . . + (j), and the second generated by (1|0) + . . . + (j + 1|j).
Denote the first (resp., second) subcomplex by C 1 (resp., by C x ). Proposition 2.3 now guarantees that B Kh := H * (B) admits an A ∞ structure quasi-isomorphic to B, which we may describe explicitly once we have maps p, ι, and h satisfying the assumptions of Proposition 2.3. We describe these maps in the case i > j, leaving the completely analogous cases i ≤ j to the reader.
The inclusion map ι is the F-linear extension of: With respect to the bases: for C 1 , and: for C x , the projection map p is the F-linear extension of: The homotopy map h is the F-linear extension of: One can now either see directly that B is formal by applying Lemma 2.16 or simply note that the sum of the two gradings associated to each element in B Kh is 0. As each structure map m n : B Kh ⊗n → B Kh is degree (2 − n) on this sum, nontrivial multiplications are only possible when n = 2.
Verification that the bigradings and multiplication are as stated is a straightforward calculation.
Remark 3.13. The algebra B Kh is isomorphic to the algebra of lower triangular (m + 1) × (m + 1) matrices over F[x]/(x 2 ) with only 0's and 1's on the main diagonal. We define an algebra isomorphism by sending the generator i 1 j ∈ i B Kh j (resp., i x j ∈ i B Kh j ) to the (m + 1) × (m + 1) matrix whose only nonzero matrix entry is a 1 (resp., an x), located in row number i and column number j (where we assume that rows and columns are numbered from 0 to m).
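For instance, when m = 2 a general element of B Kh corresponds under this isomorphism to a matrix of the following shape (a small illustration of the remark, with ε i ∈ {0, 1} and a ij , b ij ∈ F):

```latex
\begin{pmatrix}
\varepsilon_0 & 0 & 0 \\
a_{10} + b_{10}\,x & \varepsilon_1 & 0 \\
a_{20} + b_{20}\,x & a_{21} + b_{21}\,x & \varepsilon_2
\end{pmatrix},
\qquad x^2 = 0 .
```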
We close our discussion of B Kh with a technical lemma that will prove useful in our construction of the braid group action on D ∞ (B Kh ) (in particular, in the proof of Proposition 3.18).
is a quasi-isomorphism. Furthermore, there exists an A ∞ quasi-isomorphism of B Kh -modules, p B : B → B Kh , whose first few terms are given by: • 0 for all other basis elements a ∈ B, b ∈ B Kh in the proof of Lemma 3.12.
Recall that the "Transfer Theorem" [9, Thm. 2.1] tells us how to extend ι, p to A ∞ quasi-isomorphisms. Explicitly, one defines and constructs higher terms of ι B , p B satisfying the A ∞ relations for morphisms. Since ι, p induce isomorphisms on homology, ι B and p B will then yield We begin by calculating the higher terms of ι B . But here our work is already done, since ι, p, and h satisfy the assumptions of Lemma 2.16, hence (ι B ) (n1|1|n2) = 0 for all (n 1 + 1 + n 2 ) > 1, as desired.
We now move to the calculation of the higher terms of p B .
Computation of (p B ) (1|1|0) : Here we note that ph = 0, and Im(h) and Im(m B 1 ) are both left B Kh submodules, so an application of Lemma 2.17, implies that p : B → B Kh is a left module map (and, hence, we can extend p to a left A ∞ morphism with no higher left A ∞ terms). In particular, (p B ) (1|1|0) := 0, as desired.
Computation of (p B ) (0|1|1) : Unfortunately, Im(h) and Im(m B 1 ) are not right B Kh submodules, so we will have to work harder here. The Transfer Theorem ([9, Thm. 2.1]), combined with remarks in the proof of Lemma 2.17, tells us that = 0 unless the triple i, j, k satisfies the property that i ≤ j, j > k, and i ≥ k. We can see this by a case-by-case analysis (see the table below, which describes (p B ) (0|1|1) in the various cases). For example, if j < k (first column of table) then j b k = 0, and if i < k (first entry in second column), then p (0|1|0) := 0. In both cases, we then have (p B ) (0|1|1) ( i a j ⊗ j b k ) = 0. On the other hand, when i > j ≥ k or i = j = k (the remaining entries in the table except the top two in the third column), we notice that Since ph = 0, we have (p B ) (0|1|1) = 0 in these cases as well.
We are therefore left to compute (p B ) (0|1|1) when i ≤ j, j > k, and i ≥ k (the starred entries of the table). There are three subcases.
Case 1: i < j, j > k, and i = k Here, we notice that for basis elements i a j , j b k , we have In this case, Case 2: i < j, j > k, and i > k Again, we notice that for basis elements i a j , j b k , we have • i a j = ( + 1| | + 1) for k ≤ ≤ i and j b k = j 1 k , in which case Case 3: i = j > k An analysis similar to the previous cases allows us to conclude that Since each i Q ⊂ Q * is a complex of projective right modules over A, the functor Q * ⊗ A − is exact, so F is clearly well-defined. To prove that G is also well-defined, we will show that the right dg B-module Q) is homotopy equivalent to a semi-free dg Bmodule, and so tensoring with this dg B-module is exact.
Let M C( i 1 i−1 ) denote the mapping cone of the chain map i 1 i−1 : . There is an A-linear chain map ι : P i → M C( i 1 i−1 ) given by the inclusion of P i into Q i , and an A-linear chain map p : M C( i 1 i−1 ) → P i given by We leave it to the reader to verify that pι = Id and ιp = Id + ∂h + h∂, where ∂ is the differential in M C( i 1 i−1 ) and h : Thus P i is homotopy equivalent to the mapping cone of the chain map ∈ A for f ∈ Q * and q ∈ Q is an isomorphism of dg bimodules. We first note that the differential in Q ⊗ B Q * is trivial because the differential in Q (resp., Q * ) is given by right (resp., left) multiplication with the element and so the differential in Q ⊗ B Q * is equal to b ⊗ Id + Id ⊗ b = 2(b ⊗ Id) = 0. Since the differential in A is trivial as well, it thus suffices to show that ψ is a homotopy equivalence.
However, we have already seen that Q is homotopy equivalent to a sum of complexes of the form i B → i−1 B where i B = Hom A (Q i , Q), and an analogous argument shows that Q * is homotopy equivalent to a sum of complexes of the form B i−1 → B i where B i := Hom A (Q, Q i ), and B is homotopy equivalent to a sum of complexes of the form . Moreover, one can check that under these various homotopy equivalences, the map ψ corresponds to the canonical map from and now the fact that ψ is a homotopy equivalence follows from the identities To understand the braid group action on D ∞ (B Kh ), recall (see [24,Sec. 2d]) that Khovanov-Seidel associate • to the elementary Artin generator σ + k the dg A-bimodule Accordingly, we denote by M σ + k (resp., M σ − k ) the mapping cone After an application of Lemma 3.10: the induced maps β k , γ k can be described as β k = Id⊗β k ⊗Id and γ k = Id⊗γ k ⊗Id.
To further streamline notation, we set We will also find it convenient to replace the mapping cones M σ ± k with simpler, quasi-isomorphic, mapping cones. We do this by replacing each bimodule B and P k ⊗ k P by its homology and the maps β k , γ k by the induced maps on homology.
We already understand the structure of B Kh = H * (B) (Lemma 3.12). The homology of P k (resp., k P ) is described by: Lemma 3.16. P k (resp., k P ) is formal as a left (resp., right) B Kh module.
Furthermore, P Kh k := H * P k and k P Kh := H * k P have the following explicit descriptions.
and right multiplication by a generator θ ∈ B Kh on k P Kh is given by: Proof. By Lemma 3.10, Hom A (Q i , P k ) is given by the complex i Q ⊗ A P k and This implies that P k , k P are given by: We see from above that i Q ⊗ A P k is: • rank one, generated by (k−1|k) ∈ k−1 P k , with 0 differential, when i = k−1, • a direct sum of Span (k|k − 1|k) ⊂ k P k and the acyclic subcomplex when i = k, and • a direct sum of the two acyclic subcomplexes To show formality of P k , we use Lemma 2.17 to show that all induced multiplications When i ≤ k − 1, Hom A (Q i , P k ) has trivial differential, so the maps ι i , p i , h i are clear. In the case i ≥ k, we define: as follows.
Let θ denote any generator of Hom A (Q i , P k ), let u * denote the lone generator of H * (Hom A (Q k , P k )), and let ∂ denote the differential on the complex Hom A (Q i , P k ). Note that H * (Hom A (Q i , P k )) = 0 for i > k. Then we define ι i , p i , h i to be the F-linear extensions of: In the above, ∂ −1 (θ) is defined to be the (unique) basis element θ satisfying ∂(θ ) = θ.
It is now straightforward to verify that (1) p i h i = 0 for all i, and (2) Im(h i ) and Im(∂) are left B Kh -submodules.
Therefore P k is formal by Lemma 2.17. To see that k P is also formal, we perform a very similar computation, observing that k P satisfies the assumptions of Lemma 2.16 as a right B Kh -module, hence is formal. Now, we simply note that H * ( P k ) is rank 2, generated by as is H * ( k P ), generated by • u := p k (k) ∈ k P k ⊂ Hom A (P k , Q k ) and • Recalling (see the proof of Lemma 3.12) that the generators i 1 j (for i ≥ j) and i x j (for i > j) of B Kh are represented by (0)+. . .+(j) and (1|0)+. . .+(j+1|j), we see that the multiplication is also as claimed.
We now have the proposed model To understand the induced maps on homology, we must explicitly understand the quasi-isomorphisms B ↔ B Kh and P k ⊗ k P ↔ P Kh k ⊗ k P Kh . Explicitly, if ι P ⊗ ι P : P Kh k ⊗ k P Kh → P k ⊗ k P and p B : B → B Kh are A ∞ quasi-isomorphisms, then the induced A ∞ morphism on homology is given by: Furthermore, (cf. [48,Cor. 3.16]), the mapping cones satisfy: Similarly, if ι B : B Kh → B and p P : P k ⊗ k P → P Kh k ⊗ k P Kh are A ∞ quasi-isomorphisms, then: is the F-linear B Kh -bimodule map (i.e., strict A ∞ morphism) determined by Proof. We must compute the terms of the induced A ∞ morphism γ Kh k := (p P ⊗ p P ) • β k • ι B , as described above.
We begin by noting that the (n 1 |1|n 2 ) map of the A ∞ morphism γ Kh k , i.e., the map is degree (−(n 1 + n 2 ), 0) with respect to the bigrading. This follows from the A ∞ relations for morphisms, combined with Lemma 3.11.
An examination of the bigradings of elements of B Kh and P Kh k ⊗ k P Kh then immediately implies that γ Kh k (n1|1|n2) = 0 unless n 1 = n 2 = 0, so γ Kh k is a strict A ∞ isomorphism, as desired. A quick way to see this is to notice that the sum of the two gradings associated to each element in B Kh and P Kh k ⊗ k P Kh {−1} is 0, and (γ k ) (n1|1|n2) is degree −(n 1 + n 2 ) on this sum.
It is now easy to verify that is as described. In particular, γ Kh k is determined by its behavior on the (m + 1) idempotents i 1 i ∈ B Kh , since it is a B Kh -bimodule map.
For example: We leave the remaining similarly straightforward computations to the reader.
where the terms of the A ∞ morphism β Kh k are given as follows.
When n 1 = 1, n 2 = 0: is the trilinear map satisfying: Proof. As in the proof of Proposition 3.17, the (n 1 |1|n 2 ) map of the A ∞ morphism β Kh k is degree (−(n 1 + n 2 ), 0) with respect to the bigrading. In this case, however, we see that the sum of the two gradings for each element in P Kh k ⊗ k P Kh is 1, while the sum of the two gradings associated to each element in B Kh is 0. Since β Kh k (n1|1|n2) is degree −(n 1 + n 2 ) on this sum, we conclude that β Kh k (n1|1|n2) = 0 unless −(n 1 + n 2 ) = −1, as claimed. To calculate β Kh k (n1|1|n2) in the relevant cases (n 1 = 1, n 2 = 0) and (n 1 = 0, n 2 = 1), we recall that β Kh k : P Kh k ⊗ k P Kh → B Kh is given by the composition Calculation of β Kh k (1|1|0) : Since β k is, by definition, a strict A ∞ morphism, we see that Furthermore, we showed during the proof of Lemma 3.14 that p (1|1|0) := 0, so the first term above also vanishes, leaving: Another application of the Transfer Theorem [9, Thm. 2.1] tells us that on basis elements b ∈ B Kh and θ ∈ P Kh k , we have Composing the above with p (0|1|0) • β k yields the desired result. We perform this computation in one case, leaving the small number of remaining (similarly straightforward) computations to the reader. Assume i ≥ k + 1. Then: and an application of Lemma 2.16 (see the proof of Lemma 3.16) implies that ι (0|1|1) := 0, leaving: Referring to Lemma 3.14, we again perform a sample computation, leaving the remaining computations to the reader. Assume j ≤ k − 1. Then: Now, if we have a general braid group element σ ∈ B m+1 that decomposes as σ = σ ± k1 · · · σ ± kn , [24] associates to σ ∈ B m+1 the dg bimodule: over the algebra A (or, rather, its equivalence class in D b (A)). Considered as an element of D ∞ (A), we can alternatively describe M σ in terms of an A ∞ tensor product, by the following. Since each M σ ± k is a bounded complex of sweet bimodules over A whose higher multiplications are all trivial, the ordinary tensor product above agrees with the A ∞ tensor product in D ∞ (A). In other words, Since A ∞ tensor products are sent to A ∞ tensor products under the derived equivalence D ∞ (A) ↔ D ∞ (B) ↔ D ∞ (B Kh ), we see that the element of D ∞ (B Kh ) associated to a general braid σ = σ ± k1 · · · σ ± kn ∈ B m+1 is given by: Remark 3.21. The B Kh modules described here (and, more generally, any A ∞ module over the Hom algebra of a basis of curves) are equipped with three gradings: (1) a (co)homological grading, (2) an internal grading counting steps to the left in the path algebra, A m , which corresponds to the power of t under the identification of the Khovanov-Seidel construction with a categorification of the Burau representation (see [24,Sec. 2e]), (3) a grading by path length in the path algebra, A m , which corresponds to Khovanov's j (quantum) grading if one identifies the Khovanov-Seidel quiver algebra A m with the algebra A 1,m appearing in [12,51].
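In summary, with notation as above (and writing, as a notational assumption on our part, M Kh σ ± k for the mapping cones over B Kh constructed in Propositions 3.17 and 3.18), the bimodules associated to σ = σ ± k1 · · · σ ± kn take the following shape, where ⊗̃ denotes the A ∞ tensor product:

```latex
M_{\sigma} \;=\; M_{\sigma^{\pm}_{k_1}} \otimes_{A} \cdots \otimes_{A} M_{\sigma^{\pm}_{k_n}},
\qquad
M^{Kh}_{\sigma} \;=\; M^{Kh}_{\sigma^{\pm}_{k_1}} \,\widetilde{\otimes}_{B^{Kh}} \cdots \,\widetilde{\otimes}_{B^{Kh}}\, M^{Kh}_{\sigma^{\pm}_{k_n}} .
```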
The first two of these gradings constitute the bigrading described in [24,Sec. 3d] and discussed throughout this section.
3.5. B Kh and Fukaya categories. For completeness, and to motivate the constructions in the next section, we briefly outline a geometric interpretation of the algebra B Kh and the bimodules M Kh σ ± i , in terms of the Fukaya category of a suitable Lefschetz fibration [46,48,49]. (Since the construction in [48] does not work over Z/2Z, the setup of [49] is the most appropriate one here.) Namely, denote by p a polynomial of degree m + 1 whose roots are exactly the points of ∆, and consider the complex surface S = {(x, y, z) ∈ C 3 | x 2 + y 2 = p(z)}. The projection to the z coordinate defines a Lefschetz fibration π S : S → C, whose generic fiber is an affine conic, and whose m + 1 vanishing cycles are all isotopic to each other. The basis of arcs Q = {q 0 , . . . , q m } of Figure 5 then determines a collection of Lefschetz thimbles Q S 0 , . . . , Q S m (i.e., Lagrangian disks in S whose boundaries are the vanishing cycles in the fiber π −1 S (−1)). These form an exceptional collection which generates the (directed) Fukaya category F(π S ) of the Lefschetz fibration π S [48,49].
Perturbing the symplectic structure slightly, we can ensure that the vanishing cycles (which are Hamiltonian isotopic loops in π −1 S (−1) C * ) are mutually transverse and intersect in a suitable manner (i.e., they pairwise intersect in exactly two points, and the intersection points are arranged in a configuration which forces the vanishing of higher products on Floer complexes within the ordered collection).
The Floer complexes which determine morphisms from Q S i to Q S j in the directed Fukaya category then have rank 2 whenever i > j, while by definition these morphism spaces have rank 1 for i = j and 0 for i < j [46]. (Note: our ordering convention for bases of arcs is the opposite of Seidel's.) Moreover, an easy calculation in Floer homology then shows that is isomorphic to B Kh (viewing both as A ∞ -algebras, in which m n happens to vanish for n ≠ 2). The categories of modules over F(π S ) and B Kh are therefore equivalent.
In fact, the B Kh -module P Kh k has a geometric counterpart via this equivalence, namely a Lagrangian sphere P S k in S which projects under π S to a line segment connecting two consecutive points of ∆. Indeed, P S k intersects Q S k−1 and Q S k in one point each, and is disjoint from the other Q S i ; it is then not hard to check that i Hom F (π S ) (Q S i , P S k ) P Kh k as an A ∞ -module over B S B Kh ). See Chapter 20 of [48] for more about the symplectic geometry of S.
Elements of the braid group B m+1 acting on (D m , ∆) lift to symplectic automorphisms of S preserving the fiber π −1 S (−1); specifically, the Artin generator σ k lifts to the Dehn twist about the Lagrangian sphere P S k . Denoting again by σ the symplectic automorphism of S which corresponds to a braid σ ∈ B m+1 , we associate to it the A ∞ -bimodule
Bordered Floer algebras and bimodules
We now consider the analogues in bordered Floer homology of the Khovanov-Seidel bimodules described in Section 3. We follow Lipshitz-Ozsváth-Thurston in [32,33,34] and Zarev in [54], using a symplectic reinterpretation of their work due to the first author [4].
4.1. The bordered Floer algebra. Denote by Σ the double cover of D m branched at the m + 1 points of ∆ (with covering map π Σ : Σ → D m ). We make Σ a parametrized surface by equipping it with two marked points z ± on its boundary (the two preimages by π Σ of a point in ∂D m ) and the collection of arcs Q Σ = {Q Σ 0 , . . . , Q Σ m }, where Q Σ k := π −1 Σ (q k ). In the language introduced by Lipshitz, Ozsváth and Thurston [32], the parametrized surface (Σ, z ± , Q Σ ) is described combinatorially by a (twice) pointed matched circle (or pair of circles when m is odd), Z Q . This consists of a pair of oriented intervals (the two components of ∂Σ \ {z ± }), each carrying m + 1 distinguished points (the end points of disjoint pushoffs of the Q Σ k ), labeled successively in decreasing order m, . . . , 1, 0 along each interval (according to the manner in which the end points of the 1-handles Q Σ k match up). Recall that the 1-moving strands algebra A (Z Q , 1), 7 which we also denote by B HF for consistency with the preceding sections, can be described as: 7 Here we use the notation convention from [54], which differs by a shift from the one in [32].
and the multiplication m HF We also set m HF n = 0 for n ≠ 2.
Remark 4.1. Let Fρ ⊕ Fσ denote the F-algebra generated by two orthogonal idempotents ρ and σ, and let 1 := ρ + σ be its identity element. As we did in the previous section for B Kh (Remark 3.13), we can interpret B HF as the algebra of all lower triangular (m + 1) × (m + 1) matrices over Fρ ⊕ Fσ which have only 0's and 1's on the main diagonal: We identify the generator i ρ j ∈ i B HF j (resp., i σ j ∈ i B HF j ) with the (m+1)×(m+1) matrix whose only nonzero matrix entry is a ρ (resp., a σ), located in row number i and column number j; and we identify the generator i 1 i ∈ i B HF i with the (m + 1) × (m + 1) matrix whose only nonzero entry is a 1, located on the diagonal in row number i. (Here we assume that rows and columns are numbered from 0 to m).
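For concreteness, here is a small worked illustration of this identification; the example is ours, not from the text. Take m = 2, so that elements of B HF are 3 × 3 lower triangular matrices over Fρ ⊕ Fσ, and the generators 2 ρ 0 and 1 1 1 correspond to

```latex
{}_2\rho_0 =
\begin{pmatrix}
0 & 0 & 0\\
0 & 0 & 0\\
\rho & 0 & 0
\end{pmatrix},
\qquad
{}_1 1_1 =
\begin{pmatrix}
0 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 0
\end{pmatrix}.
```

Matrix multiplication then reproduces the algebra structure: for instance 2 ρ 1 · 1 ρ 0 = 2 ρ 0 , while 2 ρ 1 · 1 σ 0 = 0, since ρ and σ are orthogonal idempotents.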
The 1-moving strands algebra has a more geometric interpretation in terms of the arcs Q Σ 0 , . . . , Q Σ m on the surface Σ. Namely, these arcs (or small isotopic deformations of them) are objects of (and in fact generate) the "partially wrapped" Fukaya category of Σ relatively to the two marked points z ± (see [3,4]). In this category, the morphism spaces hom(Q Σ i , Q Σ j ) are the Floer complexes generated by intersections between suitably perturbed copies of the arcs (namely, using the flow of a suitable Hamiltonian to ensure transversality and push the end points so that they lie in a specific position along the components of ∂Σ \ {z ± }). In our case, {z ± } is a fiber of the covering map π Σ , which is in fact a Lefschetz fibration. The partially wrapped Fukaya category is then equivalent to F(π Σ ), Seidel's Fukaya category of the Lefschetz fibration π Σ (see the Remark in section 4 of [3]), and the Q Σ i are nothing but the Lefschetz thimbles associated to the basis of arcs Q of Figure 5.
Note that the technical setup in [4] is somewhat different from those in [46] and [49], even though the resulting categories are equivalent and, in the case at hand, all calculations for the thimbles Q Σ i give exactly the same answer on the nose. We use the notation F(π Σ ) for familiarity; however the comparison with bordered Floer homology is simpler in the setup of [4], see Remark 4.3 below.
The Floer complexes which determine morphisms from Q Σ i to Q Σ j have rank 2 whenever i > j, while these morphism spaces have rank 1 for i = j and 0 for i < j. In the setting of [4], this is because the image of Q Σ i under the appropriate Hamiltonian [4, §4.2] intersects Q Σ j transversely in 0, 1 or 2 points depending on cases; while in the directed Fukaya category of [46], this is because the vanishing cycles consist of the same two points in the case i > j, and by definition in the other cases. (As before, our ordering convention for bases of arcs is the opposite of Seidel's.) An easy calculation in Floer homology then shows that is isomorphic to B HF , viewing both as A ∞ -algebras in which m n happens to vanish for n ≠ 2 (cf. [3,4]). The categories of modules over F(π Σ ) (in any of its incarnations) and B HF are therefore equivalent.
Bordered Floer bimodules.
Elements of the braid group B m+1 acting on (D m , ∆) lift to elements of the mapping class group of the double cover Σ; specifically, the Artin generator σ k lifts to the Dehn twist about the simple closed curve P Σ k = π −1 Σ (p k ), where p k is the line segment in D m joining the two points labeled k−1 and k (see Figure 7). We denote byσ the mapping class group element which lifts a braid σ ∈ B m+1 . With this understood, there are two natural ways of associating an A ∞ -bimodule over B HF to a braid σ.
On one hand, Lipshitz, Ozsváth and Thurston [33] associate to the element σ of the mapping class group a bimodule CF DA(σ) over the strands algebra, defined in terms of a suitable Heegaard diagram for the "mapping cylinder" ofσ, i.e. the 3-manifold Σ × [0, 1] equipped with parametrizations of the two boundary components which differ by the action ofσ (see [32,33] for details). We denote by M HF σ the 1-moving strand part of CF DA(σ); this is an A ∞ -bimodule over B HF (in fact a "type DA" bimodule, which has nicer algebraic properties).
On the other hand,σ acts on the Fukaya category of π Σ , and the A ∞ -functor induced byσ naturally yields a bimodule over F(π Σ ), hence over B Σ . More concretely, following [3] (see also [34]) we set which is naturally an A ∞ -bimodule over B Σ B HF . Proof. It is known [33] that the bordered bimodule CF DA(id) is quasi-isomorphic to the strands algebra viewed as a bimodule over itself; therefore M HF B Σ M Σ id (as bimodules). We now give a more geometric interpretation, still in the case σ = id.
Following the terminology in [34], denote by AZ the bordered Heegaard diagram depicted in Figure 6, in which the α-arcs and the β-arcs are obtained from Q Σ k by pushing the end points along the boundary of Σ, in such a manner that the end points of the α-arcs all lie before those of the β-arcs along the oriented intervals ∂Σ \ {z ± }. Then the 1-moving strand part of the A ∞ -bimodule CF AA(AZ) is quasi-isomorphic to M HF id B HF ; in fact, CF AA(AZ) CF DA(id) A(Z Q ) [4,54,34]. Thus it is enough to show that the 1-moving strand part of CF AA(AZ) is quasi-isomorphic to M Σ id = B Σ . To understand this, recall that morphisms in F(π Σ ) are computed by perturbing the arcs to the same positions used in the Heegaard diagram AZ. Hence, the generators of Hom(Q Σ i , Q Σ j ) are precisely the intersection points between β i and α j , i.e. the generators of the 1-moving strand type AA bimodule. Moreover, the structure maps m (k|1| ) count: • in the case of the type AA bordered Floer bimodule CF AA(AZ), holomorphic strips in Σ connecting two generators of the Heegaard-Floer complex, and with k (resp. ) additional strip-like ends corresponding to chords between β (resp. α) arcs; • in the case of M Σ id (bimodule over the Fukaya category), rigid holomorphic polygons bounded by k + 1 successively perturbed copies of the β-arcs and + 1 successively perturbed copies of the α-arcs (in the setting of [4]; with other definitions of F(π Σ ) the interpretation is slightly different). However, there is a natural correspondence between these two types of objects; see Proposition 6.5 of [4] and its proof for details.
In the case of an arbitrary braid σ, denote byσ(AZ) the bordered Heegaard diagram obtained from AZ by havingσ act on the α-arcs (leaving the β-arcs unchanged). From the perspective of Heegaard-Floer theory, the bordered 3-manifold represented byσ(AZ) differs from that corresponding to AZ by a reparametrization of its α-boundary via the action ofσ, or equivalently, by attaching the mapping cylinder ofσ. Thus CF AA(σ(AZ)) CF AA(AZ) ⊗ CF DA(σ) CF DA(σ).
Hence M HF σ is quasi-isomorphic to the 1-moving strands part of CF AA(σ(AZ)). On the other hand, by the same argument as above this latter bimodule is quasi- Remark 4.3. The comparison between the higher structure maps of the bimodules defined from F(π Σ ) and from bordered Floer homology is easiest in the setup of [4], where a specific Hamiltonian flow is used to perturb the Lagrangians and ensure transversality, and the structure maps count honest holomorphic curves bounded by successively perturbed copies of the Lagrangians (see Lemma 4.7 of [4]: the definition of the partially wrapped category is much more cumbersome, but in the case at hand it simplifies vastly). The reader who wishes to reproduce this argument using Seidel's definition of F(π Σ ) instead is referred to [49], where the directed Fukaya category is recast in terms of the symplectic geometry of the thimbles and solutions to Floer's equation with Hamiltonian perturbations. The relevant Hamiltonians behave essentially in the same manner as that of [4], and the main remaining difference is that one counts solutions to a perturbed holomorphic curve equation with boundary on the original Lagrangians, rather than (cascades of) honest holomorphic curves with boundary on perturbed copies of the Lagrangians. The two counts can be compared by a fairly standard argument, or alternatively the proof of [4, Proposition 6.5] can be adapted to that setting.
If a braid σ can be expressed in terms of the Artin generators as σ = σ ± k1 . . . σ ± kn , then its lift can be written asσ =σ ± k1 . . .σ ± kn , and the pairing theorem for CFDA bimodules [32,33] implies that Thus it is enough to understand the bimodules M HF σ ± k M Σ σ ± k associated to the Artin generators and their inverses. We do this working in the category F(π Σ ). Recall that morphism spaces in that category are defined by Lagrangian Floer theory after a suitable perturbation (so the end points of arcs lie in the correct order along the boundary of Σ); in particular they are generated by intersection points.
Focusing first on M HF σ + k , and recalling thatσ + k is the positive Dehn twist about P Σ k , Seidel's exact triangle for Lagrangian Floer homology [47] tells us that, for each i, j ∈ {0, . . . , m}, Hom where β HF k is the Floer product map (cf. [47]) induced by counting holomorphic triangles in Σ whose sides lie on (suitable perturbations of) Q Σ i , P Σ k , Q Σ j , appearing in counterclockwise order around the boundary. Moreover, these quasi-isomorphisms are compatible with Floer products, in the sense that in D ∞ (B HF ) the bimodule M HF σ + k is equivalent to the complex of bimodules obtained by taking the direct sum of the above complexes over all i, j.
In analogy to the previous section, we introduce the A ∞ -modules which allows us to write Like the linear term described above, the higher terms i0,...,in 1 j0,...,jn 2 of the A ∞ -bimodule homomorphism β HF k count rigid holomorphic polygons in Σ whose sides lie on (suitable perturbations of) Q Σ where γ HF k is induced by counting holomorphic triangles in Σ whose sides lie on (suitable perturbations of) P Σ k , Q Σ i , Q Σ j , appearing in counterclockwise order around the boundary. Thus, in D ∞ (B HF ) we have where the higher terms of the A ∞ -bimodule homomorphism γ HF k again count rigid holomorphic polygons in Σ.
We remark that, in our very simple setting, these counts are equivalent (by the Riemann mapping theorem) to counts of topological immersed triangles in Σ with the stated boundary conditions, and satisfying a local convexity condition at their corners.
Explicit calculations.
We now make the above story more explicit, by determining the left (resp., right) A ∞ -modules P HF k (resp., k P HF ) and the maps β HF k and γ HF k . Since P Σ k intersects Q Σ k−1 and Q Σ k transversely once each and is disjoint from all the other Q Σ j , the vector spaces underlying these modules have rank 2. The multiplication maps m (n|1|0) : B HF ⊗n ⊗P HF k → P HF k and m (0|1|n) : k P HF ⊗ B HF ⊗n → k P HF are given by counting holomorphic (n + 2)-gons in Σ as in Figure 7. Again letting the two generators of P HF k (resp., of k P HF ) be denoted by u * , v * (resp., by u, v) and letting θ represent an element of B HF , it is easily verified (see Figure 8) that the m (1|1|0) (resp., m (0|1|1) ) multiplication is given by: Figure 8. The holomorphic triangles giving rise to the nontrivial multiplication maps m (1|1|0) : . The other nontrivial multiplication maps can be seen in a similar manner.
A more conceptual explanation is that it is possible to find a trivialization of the tangent bundle of Σ and graded lifts [48] of the Lagrangians P Σ k , Q Σ 0 , . . . , Q Σ m , and hence a Z-grading by Maslov index on B HF and the modules P HF k , k P HF , with the following properties: • all the generators of B HF have degree 0; • the generators u * , v * of P HF k have the same degree. • the generators u, v of k P HF have the same degree. Not all degrees can be taken to be zero: in fact deg u+deg u * = deg v+deg v * = 1.
Since the maps m (n|1|0) and m (0|1|n) are compatible with the grading and have degree 1 − n, this forces their vanishing unless n = 1.
We now turn to the A ∞ morphisms β HF k and γ HF k . The calculations are simplified by constraints arising from the Maslov Z-grading.
First, we observe that β HF k is a degree-preserving A ∞ -homomorphism of bimodules. Namely, since (β HF k ) (n1|1|n2) corresponds to a Floer product of order (n 1 + n 2 + 2) in F(π Σ ), it has degree −(n 1 + n 2 ). However, P HF k ⊗ k P HF is Figure 9. The above diagram verifies both that the linear part of β HF is zero, and that the map By definition, these maps count holomorphic triangles with boundary on P Σ k and on two perturbed copies of Q Σ k , denoted by (Q Σ k ) 1 and (Q Σ k ) 2 in the picture; in counterclockwise order, the successive edges must lie on ( Hence, the shaded topological triangle does not contribute to β HF k , because its boundary has the incorrect orientation, hence it does not admit a holomorphic representative. However, it does contribute to the map γ HF k . Computations for the pairs (i, j) = (k, k − 1), (k − 1, k − 1) are similarly straightforward. concentrated in degree 1, while all the generators of B HF have degree 0. Therefore, the only non-trivial terms in β HF k are those of degree −1, namely (β HF k ) (1|1|0) and (β HF k ) (0|1|1) . In particular the linear term β HF k : Hom vanishes identically. Similarly, γ HF k , which is an A ∞ -refinement of the pair of pants coproduct in Floer homology, has degree dim C (Σ) = 1 with respect to the Maslov Z-grading. Hence, the map (γ HF k ) (n1|1|n2) has degree 1 − (n 1 + n 2 ) and, for degree reasons, it must vanish identically unless n 1 + n 2 = 0. Thus, the only nontrivial term of γ HF k is the linear one.
The calculations are further simplified by recalling that Lemma 4.4. γ HF k : B HF → P HF k ⊗ k P HF is the bimodule map determined by u * ⊗ u when i = k, and 0 otherwise and by associativity with respect to the multiplication. Moreover, the higher order maps (γ HF k ) (n1|1|n2) vanish identically for (n 1 , n 2 ) = (0, 0).
, since in all other cases either the domain or the target is zero. The nontrivial cases are then determined by counting immersed triangles in Σ; the case (i, j) = (k, k) is shown in Figure 9. By inspection, we see that γ HF k is given by: The vanishing of the higher maps follows from the degree argument explained above.
The story for β HF k is slightly more complicated, because the maps which count holomorphic 4-gons in Σ, depend on the choice of Hamiltonian perturbations used to resolve triple intersections at the branch points of π Σ . (Of course, the behavior of Lagrangian Floer homology under Hamiltonian isotopies guarantees that the maps obtained from different choices are homotopic.) To fix a convention, we perturb P Σ k away from the branch points of π Σ in such a way that its intersections with Q Σ k and Q Σ k−1 occur on the sheet of the double cover that contains the generators i ρ j . With this understood, we have: Figure 10. The above diagram verifies that counts rigid holomorphic 4-gons with successive edges, in counterclockwise order, on perturbed copies of . The only contribution comes from the shaded region. Figure 11. The above diagram verifies that and (β HF k ) (0|1|1) : Proof. By definition, (β HF k ) (1|1|0) counts rigid holomorphic 4-gons in Σ whose successive edges, in counterclockwise order, lie on suitably perturbed copies of the following Lagrangians: Q Σ i ; either Q Σ k (for u * ) or Q Σ k−1 (for v * ); P Σ k ; and either Q Σ k (for u) or Q Σ k−1 (for v). The count depends on the perturbations, so we have to be more specific.
Since we are working in the Fukaya category F(π Σ ), the various arcs must be perturbed by Hamiltonian isotopies which ensure that their end points are suitably ordered along ∂Σ; these perturbations are responsible for the intersection points corresponding to the generators i ρ k and i σ k (resp. i ρ k−1 , i σ k−1 ), which we take to lie close to the boundary of Σ. By contrast, the intersection points corresponding to the generators u * , u and k 1 k normally all lie at the k-th branch point of π Σ , and perturbations are needed to avoid triple intersections. As mentioned above, we achieve this by choosing a Hamiltonian which pushes P Σ k slightly towards the "ρ" side of the surface. Likewise for v * , v and k−1 1 k−1 .
With this understood, the calculation simply becomes a matter of drawing the relevant diagrams and looking for immersed four-gons with locally convex corners. The first two cases are shown on Figures 10 and 11; the others are similar.
As a consistency check, it is not hard to verify that the map β HF k is indeed an A ∞ -homomorphism, namely for all a 1 , a 2 ∈ B HF and m ∈ P HF k ⊗ k P HF we have the identities
A spectral sequence from the Khovanov-Seidel to the bordered Floer algebra
In Sections 3 and 4 we showed how to use the data of a basis, Q, to construct • a graded algebra, B Kh , using a construction of Khovanov-Seidel in [24] and • a (graded) algebra B HF , using ideas of Lipshitz-Ozsváth-Thurston in [32] as generalized by Zarev in [54] and reinterpreted by the first author in [4]. In this section, we establish the existence of a spectral sequence connecting B Kh and B HF . Explicitly, we prove: Theorem 5.1. Let B Kh be the homology of the Hom algebra associated to the basis Q and let B HF := A (Z Q , 1) be the 1-moving strands algebra associated to the arc diagram, Z Q . There exists a filtration on B HF whose associated graded algebra is isomorphic, as an ungraded algebra, to B Kh . Accordingly, one obtains a spectral sequence whose E 1 page is isomorphic to B Kh and whose E ∞ page is isomorphic to B HF . Remark 5.2. The observant reader will at this point notice that the spectral sequence described in the statement of Theorem 5.1 must be somewhat unusual, since B HF is not a dg algebra but an algebra; hence, the induced differential on the associated graded page is necessarily trivial and the associated spectral sequence on F-vector spaces collapses immediately. This should perhaps not be surprising, since the underlying F-vector spaces i B Kh j and i B HF j have the same dimension for each i, j ∈ {0, . . . , m}. On the other hand, B Kh and B HF are not isomorphic as algebras. The filtration serves only to alter the multiplicative structure on the underlying algebra and not to change the dimensions of the underlying F-vector spaces.
We pave the way for a proof of Theorem 5.1 by focusing first on a "toy model" given by the following two lemmas. Though not logically necessary for the proof of Theorem 5.1, we include them in order to motivate the definition of the filtration yielding the spectral sequence from B Kh to B HF . Lemma 5.3. There exists a filtered differential algebra, C, whose associated graded homology algebra is isomorphic to H * (S 1 ) and whose total homology algebra is isomorphic to H * (S 0 ). Furthermore, the associated graded complex and the total complex of C are formal A ∞ algebras.
Proof. We construct C using a Z/2Z-equivariant cochain complex for H * (S 1 ). Specifically, identify S 1 with the unit circle in C and give it the structure of a simplicial complex by placing two 0-simplices labeled a and b at −1 and 1, respectively, and two 1-simplices labeled A and B along the arcs e iθ |θ ∈ [π, 0] and e iθ |θ ∈ [−π, 0] , respectively, as in Figure 12. Let a * (resp. b * , A * , B * ) represent the Z/2Z cochain that assigns 1 to a (resp., b, A, B) and 0 to all other simplices in the basis.
The filtered differential algebra, C, is generated by a * , b * , A * , and B * with multiplication given by the cup product on cochains (cf. [17]): There are two commuting differentials, δ and ∂ τ , on C, giving C the structure of a differential algebra: • δ is the standard coboundary map on the simplicial cochain complex (hence satisfies the Leibniz rule with respect to the cup product multiplication), and • ∂ τ = 1 + τ , where τ is the involution on the cochain complex induced by complex conjugation on C. One easily checks that ∂ τ satisfies the Leibniz rule with respect to the cup product multiplication. We have the following two-step filtration F −1 ⊆ F 0 ⊆ F 1 : . This gives C the structure of a filtered algebra, since F i · F j ⊆ F i+j for all i, j. 8 Furthermore, the associated graded complex is (C, δ), with homology H * (S 1 ) and the homology of the total complex (C, δ + ∂ τ ) is the cohomology of the fixed point set of τ , i.e., H * (S 0 ).
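To make the two differentials concrete, here is our own computation under the conventions just described (coefficients in F = Z/2Z; complex conjugation fixes the vertices a, b and exchanges the arcs A, B):

```latex
\delta(a^*) = \delta(b^*) = A^* + B^*, \qquad \delta(A^*) = \delta(B^*) = 0, \\
\partial_\tau(a^*) = \partial_\tau(b^*) = 0, \qquad \partial_\tau(A^*) = \partial_\tau(B^*) = A^* + B^*.
```

Thus the homology of (C, δ) is one-dimensional in each of degrees 0 and 1 (generated by the classes of a* + b* and of A*), recovering H * (S 1 ), while the total differential δ + ∂ τ sends every basis cochain to A* + B*, leaving a two-dimensional homology, consistent with H * (S 0 ).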
Let 1 denote the generator of H 0 (S 1 ) and x denote the generator of H 1 (S 1 ). Then we define An application of Lemma 2.16 then implies that the associated graded algebra is formal.
We proceed similarly for (C, δ + ∂ τ ). Let ρ, σ denote the two generators of H * (S 0 ) corresponding to the two connected components of S 0 . We define: Furthermore, the filtration on the filtered differential algebra C defined in the proof of Lemma 5.3 induces a filtration on H * (S 0 ). Accordingly, we have: Lemma 5.4. Consider the following filtration, F −1 ⊆ F 0 ⊆ F 1 , on H * (S 0 ): With respect to this filtration, H * (S 0 ) is a well-defined filtered (differential) algebra with associated graded algebra isomorphic to H * (S 1 ).
Proof. The claim follows immediately from the observation that the A ∞ quasiisomorphism ι : H * (S 0 ) → C guaranteed by Lemma 2.16 is filtered, hence induces a filtered A ∞ quasi-isomorphism.
However, we find it instructive to give a more direct proof. First, H * (S 0 ) is easily seen to be a well-defined filtered (A ∞ ) algebra (Definition 2.9) with respect to the above choice of filtration. The only non-trivial check that must be performed is that m 2 ((ρ + σ) ⊗ (ρ + σ)) ⊆ F 0 , which follows since 1 := ρ + σ is the identity element of H * (S 0 ). Recalling that the multiplication on the associated graded is given by we see immediately that 1 is also the multiplicative identity in gr(H * (S 0 )), since it lies in filtration level 0.
We now proceed to the proof of Theorem 5.1.
Proof of Theorem 5.1. Recalling (see Remark 4.1) that B HF is isomorphic to the algebra of lower triangular (m + 1) × (m + 1) matrices over H * (S 0 ) with only 0's and 1's on the diagonal, we define the desired filtration, F −1 ⊆ F 0 ⊆ F 1 , on B HF as follows: We now claim that the associated graded algebra, gr B HF , is isomorphic to B Kh . To see this, note that In particular, gr(B HF ) is isomorphic to the algebra of (m + 1) × (m + 1) lower triangular matrices over gr(H * (S 0 )) with only 0's and 1's on the diagonal, where the filtration on H * (S 0 ) is the one described in Lemma 5.4. Hence, Lemma 5.4 tells us that gr(B HF ) is isomorphic to B Kh as an F-algebra, as desired.
A spectral sequence from the Khovanov-Seidel to the bordered Floer bimodules
In analogy to Theorem 5.1, we prove the following theorem relating the Hom modules described in Section 3 to the bordered Floer modules described in Section 4.
Recall that Q is the basis (of ∂-admissible bigraded curves in normal form) pictured in Figure 5.
Theorem 6.1. Let σ ∈ B m+1 be a braid, M Kh σ the bimodule associated to the pair ( Q, σ) in Section 3, and M HF σ the bordered Floer bimodule associated to the pair (Q, σ) in Section 4. There exists a filtration on M HF σ whose associated graded bimodule is isomorphic (as an ungraded A ∞ bimodule over B Kh ) to M Kh σ . Accordingly, one obtains a spectral sequence whose E 1 page is isomorphic to M Kh σ and whose E ∞ page is isomorphic to M HF σ .
Note that Theorem 5.1 is Theorem 6.1 in the special case σ = Id. The proof of Theorem 6.1 proceeds in two steps. We begin by giving an explicit construction of the filtration in the special case where σ is one of the elementary Artin braid generators, {σ ± k |k = 1, . . . , m} (Proposition 6.2). Then in the general case, σ = σ ± k1 · · · σ ± kn , we explain how to construct a filtration and appropriate spectral sequence on the A ∞ module formed as the A ∞ tensor product Since we have already shown (in the proof of Theorem 5.1) that the multiplication on gr(B HF ) matches the multiplication on B Kh , all that remains to show is: (1) the multiplication of gr(B HF ) on gr P HF k ⊗ k P HF matches the multiplication of B Kh on P Kh k ⊗ k P Kh and (2) the maps induced by γ HF k and β HF k on gr(B HF ) and gr P HF k ⊗ k P HF match the maps γ Kh k and β Kh k . Seeing that the multiplication of gr(B HF ) on gr P HF k ⊗ k P HF matches the multiplication of B Kh on P Kh k ⊗ k P Kh is a simple check of a small number of cases, bearing in mind that under the isomorphism gr(B HF ) ↔ B Kh , we have the identification i ρ j ↔ i x j .
The map induced by γ HF k on gr(B HF ) is quickly seen to match the map γ Kh k , since γ HF k is a filtered morphism with no higher terms, and the descriptions of γ Kh k (Proposition 3.17) and γ HF k (Lemma 4.4) are identical. Verifying that the map induced by β HF k on gr P HF k ⊗ k P HF matches the map β Kh k is a bit more involved but, again, requires only a handful of checks. We perform a couple here, leaving the rest to the reader. Lemma 4.5 tells us that when i ≥ k + 1: But viewed as elements of the associated graded, we have i ρ k ∈ F 1 /F 0 (B HF ) and u * ⊗ u ∈ F 1 /F 0 P HF k ⊗ k P HF , and thus the induced associated graded map is: β HF k (1|1|0) [ i ρ k ⊗ (u * ⊗ u)] := i ρ k = 0 ∈ F 2 /F 1 (B HF ).
To understand (3), we once again use the identification between enhanced Kauffman states and generators of CKh( σ), the chain complex underlying SKh( σ). One then constructs inverse chain maps CKh( σ; k) ↔ CKh( σ; −k) by reversing the orientations of those circular components of the enhanced Kauffman state representing nontrivial elements of H 1 (A). | 2018-10-31T17:32:02.127Z | 2011-07-14T00:00:00.000 | {
"year": 2011,
"sha1": "2fa62a66cf9cd17f563e49e2c412ffc585df081f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1107.2841",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2fa62a66cf9cd17f563e49e2c412ffc585df081f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
60965336 | pes2o/s2orc | v3-fos-license | Committee Machine Networks to Diagnose Cardiovascular Diseases
A parallel committee machine technique for neural network systems with back propagation, together with a majority voting scheme, is presented in this paper. Previous research on predicting the presence of cardiovascular diseases has shown accuracy rates of up to 72.9%, but at the cost of reduced prediction accuracy on the minority class. The neural network system designed in this article shows a significant increase in robustness, and it is shown that, by majority voting of the parallel networks, recognition rates reach over 90% on the V.A. Medical Center, Long Beach and Cleveland Clinic Foundation data set. Keywords— Machine learning, parallel neural networks, boosting by filtering, cardiovascular diseases
INTRODUCTION
Cardiovascular disease, also called heart disease, is a class of diseases that involve the heart or blood vessels (arteries, capillaries and veins).[1] Cardiovascular disease refers to any disease that affects the cardiovascular system, principally cardiac disease, vascular diseases of the brain and kidney, and peripheral arterial disease.[2] The causes of cardiovascular disease are diverse but atherosclerosis and/or hypertension are the most common. Additionally, with aging come a number of physiological and morphological changes that alter cardiovascular function and lead to subsequently increased risk of cardiovascular disease, even in healthy asymptomatic individuals.[3] Cardiovascular disease is the leading cause of deaths worldwide, though since the 1970s, cardiovascular mortality rates have declined in many high-income countries.[4] At the same time, cardiovascular deaths and disease have increased at a fast rate in low- and middle-income countries.[5] Although cardiovascular disease usually affects older adults, the antecedents of cardiovascular disease, notably atherosclerosis, begin in early life, making primary prevention efforts necessary from childhood.[6] There is therefore increased emphasis on preventing atherosclerosis by modifying risk factors, such as healthy eating, exercise, and avoidance of smoking.
Types of cardiovascular diseases
• Coronary heart disease (also ischaemic heart disease or coronary artery disease)
• Cardiomyopathy - diseases of cardiac muscle
• Hypertensive heart disease - diseases of the heart secondary to high blood pressure
A fairly recent emphasis is on the link between low-grade inflammation that hallmarks atherosclerosis and its possible interventions. C-reactive protein (CRP) is a common inflammatory marker that has been found to be present in increased levels in patients at risk for cardiovascular disease.[8] Also, osteoprotegerin, which is involved in the regulation of a key inflammatory transcription factor called NF-κB, has been found to be a risk factor for cardiovascular disease and mortality. Some areas currently being researched include possible links between infection with Chlamydophila pneumoniae (a major cause of pneumonia) and coronary artery disease. The Chlamydia link has become less plausible with the absence of improvement after antibiotic use.[9] Several studies have also investigated the benefits of melatonin in cardiovascular disease prevention and cure. Melatonin is a pineal gland secretion and has been shown to lower total cholesterol, very low density and low density lipoprotein cholesterol levels in the blood plasma of rats. Reduction of blood pressure is also observed when pharmacological doses are applied. Thus, it is deemed to be a plausible treatment for hypertension. However, further research needs to be conducted to investigate the side effects, optimal dosage, etc. before it can be licensed for use.[10]
Neural networks for complex medical diagnosis
In this article, an artificial intelligence alternative to medical diagnosis is proposed. Neural networks are the tools to recall for any classification job. They have developed enormously since the first attempts at modeling the perceptron architecture six decades ago [11].
The massively parallel computational structure of neural networks is what has contributed to their success in predictive tasks. It has been shown that the approach of using parallel networks is successful with respect to increasing the predictive accuracy of neural networks in robotics [12] and in disease diagnosis.
This work presents a parallel network system bound together with a majority voting system in order to further increase the predictive accuracy on a cardiovascular disease data set based on clinic recordings (reference).
The proposed system is demonstrated with a case study of cardiovascular diseases. The type of network used is the standard feed-forward back-propagation neural network, since such networks have proven useful in biomedical classification tasks [13]. The performance of the trained neural networks is evaluated according to the true positive and true negative rates of the prediction task. Furthermore, the area under the receiver operating characteristic curve and the mean squared error are used as statistical measurements to compare the success of the different models.
The paper is organized as follows: first, the data used in this work is introduced in section 2. The neural network that is boosted by filtering is illustrated in section 3. Results of the research are shown in section 4, which is followed by a conclusion.
Source of Data: Cleveland Clinic
The Cleveland Clinic, formally known as the Cleveland Clinic Foundation, is a multispecialty academic medical center located in Cleveland, Ohio, United States. The Cleveland Clinic was established in 1921 by four physicians for the purpose of providing patient care, research, and medical education in an ideal medical setting.
The Cleveland Clinic Lerner Research Institute is home to all laboratory-based, translational and clinical research at Cleveland Clinic. A new medical school, the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, was opened in 2004. The program's curriculum was devised by Cleveland Clinic staff physicians to train and mentor a new generation of physician-investigators.
The database contains 302 records with 76 attributes for each of them, but all published experiments refer to using a subset of 14 of them. The "goal" field refers to the presence of heart disease in the patient. It is integer valued from 0 (no presence) to 4. Experiments with the Cleveland database have concentrated on simply attempting to distinguish presence (values 1, 2, 3, 4) from absence (value 0).[14]
3. PRINCIPAL COMPONENT ANALYSIS
Principal component analysis (PCA) finds the linear combination of attributes that best accounts for the variations in the data. Two-dimensional plots of the first two principal components supply us with a means to inspect visually for trends, which occur as clusters of points. Later, cluster analysis may follow this step.
This simple but effective method continues to be used today, partly because of the ease with which the results are communicated and interpreted.
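As an illustration (ours, not part of the original study), the 14-attribute Cleveland file distributed by the UCI Machine Learning Repository can be loaded and its goal field binarised as follows; the file name "processed.cleveland.data", the column order and the "?" missing-value marker are assumptions based on that repository's documentation.

```python
import pandas as pd

# Assumed column order of the 14-attribute Cleveland file (UCI repository).
columns = ["age", "sex", "cp", "trestbps", "chol", "fbs", "restecg", "thalach",
           "exang", "oldpeak", "slope", "ca", "thal", "goal"]

# The distributed file marks missing values with "?".
data = pd.read_csv("processed.cleveland.data", names=columns, na_values="?")
data = data.dropna()                        # drop the few records with missing entries

# The goal field is 0 (absence) or 1-4 (presence); binarise as described in the text.
data["present"] = (data["goal"] > 0).astype(int)

X = data[columns[:-1]].to_numpy()           # 13 predictor attributes
y = data["present"].to_numpy()              # 0 = absence, 1 = presence
print(X.shape, y.mean())                    # proportion of presence cases
```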
Theory of Principal component Analysis
Multivariate statistics deals with the relation between several random variables. The sets of observations of the random variables are represented by a multivariate data matrix X (an n × p matrix), (1) where each column vector represents the data for a different variable. If c is a p × 1 matrix, then Xc is a linear combination of the set of observations. Descriptive statistics can also be applied to a multivariate data matrix X: the sample mean of the kth variable is x̄_k = (1/n) Σ_j x_jk, and the sample variance is defined by s_k^2 = Σ_j (x_jk − x̄_k)^2 / (n − 1). Next we introduce a matrix that contains statistics that relate pairs of variables (x_i, x_k), the sample covariance: s_ik = Σ_j (x_ji − x̄_i)(x_jk − x̄_k) / (n − 1). It follows that s_ik = s_ki and s_kk = s_k^2, the sample variance.
Matrix of sample covariances
The matrix of sample covariances S = (s_ik) is symmetric.
THEOREM
Let S be the p × p covariance matrix related to the multivariate data matrix X. Let the eigenvalues of S be λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_p ≥ 0, and the corresponding orthonormal eigenvectors be e_1, e_2, …, e_p. Then the ith principal component is given by the linear combination of the original variables in the data matrix X: Y_i = X e_i. The variance of Y_i is λ_i, and cov(Y_i, Y_k) = 0 for i ≠ k. The total variance of the data in X is equal to the sum of the eigenvalues: λ_1 + λ_2 + ⋯ + λ_p. The proportion of the total variance covered by the kth principal component is λ_k / (λ_1 + ⋯ + λ_p). If a large percentage of the total variance can be attributed to the first few components, then these new variables can replace the original variables without significant loss of information. Thus we can achieve significant reduction in data.
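The theorem translates directly into a short computation. The following sketch is ours (NumPy, with a random matrix standing in for the cardiovascular data): it forms the sample covariance matrix, extracts its eigendecomposition, and reports the proportion of total variance covered by the leading components.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(302, 13))              # placeholder data matrix (n samples x p variables)

Xc = X - X.mean(axis=0)                     # centre each variable on its sample mean
S = Xc.T @ Xc / (X.shape[0] - 1)            # p x p sample covariance matrix

eigvals, eigvecs = np.linalg.eigh(S)        # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]           # reorder so that lambda_1 >= ... >= lambda_p
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

Y = Xc @ eigvecs                            # principal components Y_i = X e_i
proportion = eigvals / eigvals.sum()        # proportion of total variance per component
print(np.cumsum(proportion)[:5])            # variance covered by the first five components
```

The cumulative proportion printed on the last line is what justifies keeping only the first few components, as done in the next section.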
4. PRINCIPAL COMPONENTS OF CARDIOVASCULAR DISEASES DATA
The information in the covariance matrix is used to define a set of new variables as a linear combination of the original variables in the data matrices. The new variables are derived in a decreasing order of importance. The first of them is called the first principal component and accounts for as much as possible of the variation in the original data. The second of them is called the second principal component and accounts for another, but smaller, portion of the variation, and so on.
If there are p variables, to cover all of the variation in the original data, one needs p components, but often much of the variation is covered by a smaller number of components. Thus PCA has as its goals the interpretation of the variation and data reduction.
In fact PCA is nothing but the spectral decomposition of the covariance matrix. The fourteenth component of the data, which indicates the absence or presence of the disease, was removed, and the principal component transformation of the data was carried out. The first two principal components of these two types of data are intermingled, as seen in Figure 1. In our classification perceptron, the first five principal components are found to be satisfactory. Nervous systems existing in biological organisms have for years been the subject of studies by mathematicians who tried to develop models describing such systems and all their complexities. Artificial neural networks emerged as generalizations of these concepts, with the mathematical model of the artificial neuron due to McCulloch and Pitts [15] described in 1943, the definition of the unsupervised learning rule by Hebb [16] in 1949, and the first ever implementation of Rosenblatt's perceptron [17] in 1958. The efficiency and applicability of artificial neural networks to computational tasks have been questioned many times, especially at the very beginning of their history; the book "Perceptrons" by Minsky and Papert [18], published in 1969, caused dissipation of the initial interest and enthusiasm in applications of neural networks.
It was not until the 1970s and 80s, when the backpropagation algorithm for supervised learning was documented, that artificial neural networks regained their status and proved beyond doubt to be a sufficiently good approach to many problems. An artificial neural network can be looked upon as a parallel computing system comprised of some number of rather simple processing units (neurons) and their interconnections. They follow inherent organizational principles such as the ability to learn and adapt, generalization, distributed knowledge representation, and fault tolerance. A neural network specification comprises definitions of the set of neurons (not only their number but also their organization), activation states for all neurons expressed by their activation functions and offsets specifying when they fire, connections between neurons which by their weights determine the effect the output signal of a neuron has on other neurons it is connected with, and a method for gathering information by the network, that is, its learning or training rule.
Architecture
From an architecture point of view, neural networks can be divided into two categories: feed-forward and recurrent networks. In feed-forward networks the flow of data is strictly from input to output cells, which can be grouped into layers, but no feedback interconnections can exist. On the other hand, recurrent networks contain feedback loops and their dynamical properties are very important.
The most popularly used type of neural network employed in pattern classification tasks is the feed-forward network, which is constructed from layers and possesses unidirectional weighted connections between neurons. The common examples of this category are Multilayer Perceptron or Radial Basis Function networks, and committee machines.
The multilayer perceptron type is more closely defined by establishing the number of neurons from which it is built, and this process can be divided into three parts: two of them, finding the number of input and output units, are quite simple, whereas the third, specification of the number of hidden neurons, can become crucial to the accuracy of the obtained classification results.
The number of input and output neurons can actually be seen as an external specification of the network, and these parameters are rather found in the task specification. For classification purposes, as many input nodes are required as there are distinct features defined for the objects being analyzed. The only way to better adapt the network to the problem is in the consideration of the chosen data types for each of the inputs. For example, instead of using the absolute value of some feature for each sample, it can be more advantageous to calculate its change, as this relative value should be smaller than the whole range of possible values and thus variations could be more easily picked up by the artificial neural network. The number of network outputs typically reflects the number of classification classes. The third factor in the specification of the multilayer perceptron is the number of hidden neurons and layers, and it is essential to classification ability and accuracy. With no hidden layer the network is able to properly solve only linearly separable problems, with the output neuron dividing the input space by a hyperplane. Since not many problems to be solved are within this category, usually some hidden layer is necessary.
With a single hidden layer the network can classify objects in the input space that are sometimes and not quite formally referred to as simplexes, single convex objects that can be created by partitioning out from the space by some number of hyperplanes, whereas with two hidden layers the network can classify any objects since they can always be represented as a sum or difference of some such simplexes classified by the second hidden layer.
Apart from the number of layers, there is another issue of the number of neurons in these layers. When the number of neurons is unnecessarily high the network learns easily but generalizes poorly on new data. This situation resembles the autoassociative property: too many neurons keep too much information about the training set, "remembering" rather than "learning" its characteristics. This is not enough to ensure the good generalization that is needed.
On the other hand, when there are too few hidden neurons the network may never learn the relationships amongst the input data. Since there is no precise indicator of how many neurons should be used in the construction of a network, it is a common practice to build a network with some initial number of units and, when it learns poorly, this number is either increased or decreased as required. Obtained solutions are usually task-dependent.
Activation Functions
The activation or transfer function of a neuron is a rule that defines how it reacts to data received through its inputs, all of which have certain weights.
Among the most frequently used activation functions are the linear or semi-linear function, a hard limiting threshold function, or a smoothly limiting threshold such as a sigmoid or a hyperbolic tangent. Due to their inherent properties, whether they are linear, continuous or differentiable, different activation functions perform with different efficiency in task-specific solutions. For classification tasks the antisymmetric sigmoidal hyperbolic tangent function is the most popularly used activation function: tanh(x) = (e^x − e^−x) / (e^x + e^−x).
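For reference, a minimal sketch (ours) of this activation and the derivative that backpropagation uses later:

```python
import numpy as np

def tanh(x):
    """Antisymmetric sigmoid (hyperbolic tangent) activation."""
    return np.tanh(x)

def tanh_derivative(x):
    """Derivative of tanh, written in terms of its own value."""
    return 1.0 - np.tanh(x) ** 2
```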
Learning Rules
In order to produce the desired set of output states whenever a set of inputs is presented to a neural network, it has to be configured by setting the strengths of the interconnections, and this step corresponds to the network learning procedure. Learning rules are roughly divided into three categories of supervised, unsupervised and reinforcement learning methods.
The term supervised indicates an external teacher who provides information about the desired answer for each input sample. Thus, in the case of supervised learning, the training data is specified in the form of pairs of input values and expected outputs. By comparing the expected outcomes with the ones actually obtained from the network, the error function is calculated, and its minimization leads to modification of connection weights in such a way as to obtain the output values closest to those expected for each training sample and for the whole training set.
In unsupervised learning no answer is specified as expected of the neural network, and it is left somewhat to itself to discover such self-organization as yields the same values at an output neuron for new samples as for the nearest sample of the training set.
Reinforcement learning relies on constant interaction between the network and its environment. The network has no indication of what is expected of it, but it can infer it by discovering which actions bring the highest reward, even if this reward is not immediate but delayed. Based on these rewards it performs such re-organization as is most advantageous in the long run [22].
The modification of weights associated with network interconnections can be performed either after each of the training samples or after a finished iteration of the whole training set.
The important factor in this algorithm is the learning rate η, whose value, when too high, can cause oscillations around the local minima of the error function and, when too low, results in slow convergence. This locality is considered the drawback of the backpropagation method, but its universality is the advantage.
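A minimal sketch of such a network trained by backpropagation is given below; this is our own NumPy illustration with one hidden layer, tanh activations, a single output unit and the learning rate eta discussed above, and the sizes, initialisation scale and target coding are placeholders rather than the exact choices used in the study.

```python
import numpy as np

class MLP:
    """One-hidden-layer feed-forward network trained by backpropagation."""

    def __init__(self, n_inputs, n_hidden, eta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
        self.b2 = np.zeros(1)
        self.eta = eta

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)        # hidden activations
        self.o = np.tanh(self.h @ self.W2 + self.b2)   # output in (-1, 1)
        return self.o

    def train_step(self, X, t):
        """One gradient-descent step on the squared error, for targets t in {-1, +1}."""
        o = self.forward(X)
        err = o - t.reshape(-1, 1)
        delta_o = err * (1.0 - o ** 2)                          # backpropagated output error
        delta_h = (delta_o @ self.W2.T) * (1.0 - self.h ** 2)   # error at the hidden layer
        self.W2 -= self.eta * self.h.T @ delta_o / len(X)
        self.b2 -= self.eta * delta_o.mean(axis=0)
        self.W1 -= self.eta * X.T @ delta_h / len(X)
        self.b1 -= self.eta * delta_h.mean(axis=0)
        return float((err ** 2).mean())                         # mean squared error

    def predict(self, X):
        return (self.forward(X).ravel() > 0).astype(int)        # 1 = presence, 0 = absence
```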
Architecture of artificial neural networks, Committee Machines
As the base topology of the artificial neural network committee machines, the feed-forward multilayer perceptron with sigmoid activation function, trained by the backpropagation algorithm, is used.
In the committee machine approach, a complex computational task is solved by dividing it into a number of computationally simple tasks and then combining the solutions to those tasks. In supervised learning, computational simplicity is achieved by distributing the learning task among a number of experts, which in turn divides the input space into a set of subspaces. The combination of experts is said to constitute a committee machine. Basically, it fuses knowledge acquired by the experts to arrive at an overall decision that is supposedly superior to that attainable by any one of them acting alone. The idea of a committee machine may be traced back to Nilsson [19] (1965); the network structure considered therein consisted of a layer of elementary perceptrons followed by a vote-taking perceptron in the second layer. Committee machines are universal approximators. They may be classified into two major categories:
1. Static structures. In this class of committee machines, the responses of several predictors (experts) are combined by means of a mechanism that does not involve the input signal, hence the designation "static." This category includes the following methods:
• Ensemble averaging, where the outputs of different predictors are linearly combined to produce an overall output.
• Boosting, where a weak learning algorithm is converted into one that achieves arbitrarily high accuracy.
2. Dynamic structures. In this second class of committee machines, the input signal is directly involved in actuating the mechanism that integrates the outputs of the individual experts into an overall output, hence the designation "dynamic." In this research the ensemble averaging category of committee machines is used.
Ensemble averaging
Figure 3 shows a number of differently trained neural networks (i.e., experts), which share a common input and whose individual outputs are somehow combined to produce an overall output y. In this research the outputs of the experts are scalar-valued. Such a technique is referred to as an ensemble averaging method. The motivation for its use is two-fold: • If the combination of experts in Fig. 3 were replaced by a single neural network, we would have a network with a correspondingly large number of adjustable parameters. The training time for such a large network is likely to be longer than for the case of a set of experts trained in parallel.
• The risk of overfitting the data increases when the number of adjustable parameters is large compared to the cardinality (i.e., the size of the set) of the training data. In any event, in using a committee machine as depicted in Fig. 3, the expectation is that the differently trained experts converge to different local minima on the error surface, and overall performance is improved by combining the outputs in some way.
In this research, two-hidden-layer, feed-forward, back-propagation artificial neural networks are used as the eleven committee machines, as shown in the signal flow graph in Figure 4. The cardiovascular disease data was 14 dimensional; therefore the number of input ports equaled the number of the fourteen attributes used, thus it is fourteen. There is one hidden layer with fourteen neurons within each of the eleven neural networks in the committee machines, for preserving generalization properties while achieving convergence during training with tolerance at most 0.14 for all training samples recognized properly.
For all structures of artificial neural networks, only one output is produced. It was possible to use a single output because, by interpreting its active state as one class and its inactive state as the second class, the task could be solved as well.
RESULTS AND DISCUSSION
The data is divided into two parts. The first part, the training data, consists of 75 absence and 75 presence samples. In the training stage the performances of the eleven committee machines were as in Table 2. Then the test data, which consists of the remaining 88 absence and 64 presence samples, was sent to the eleven committee machines.
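The committee experiment itself can be sketched as follows; this is our illustration, using scikit-learn's MLPClassifier as a stand-in for the hand-coded back-propagation networks, with placeholder data, seeds and iteration counts, while the eleven experts, the fourteen-neuron hidden layer and the majority vote follow the description above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def committee_majority_vote(X_train, y_train, X_test, n_experts=11, seed=0):
    """Train n_experts differently initialised networks and combine them by majority vote."""
    experts = []
    for k in range(n_experts):
        net = MLPClassifier(hidden_layer_sizes=(14,), activation="tanh",
                            max_iter=2000, random_state=seed + k)
        net.fit(X_train, y_train)
        experts.append(net)
    votes = np.array([net.predict(X_test) for net in experts])  # shape (n_experts, n_test)
    # A test sample is labelled "presence" when more than half of the experts say so.
    return (votes.sum(axis=0) > n_experts / 2).astype(int), experts

# Placeholder arrays standing in for the 75+75 training and 88+64 test samples.
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(150, 13)), rng.integers(0, 2, size=150)
X_test = rng.normal(size=(152, 13))
y_vote, experts = committee_majority_vote(X_train, y_train, X_test)
```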
Results from the testing experiment can be seen in Table 3. It has been shown in this study that parallel neural networks in combination with a majority rule based system increase the performance of true recognition rates on an imbalanced data set. In both conducted experiments all measurement parameters are improved compared to single network predictions. From the two experiments it is shown that the parallel system with forward propagation of untrained data samples increases the robustness and decreases the variability compared with the system which does not have this feature.
Despite the advantages of having an accurate system prediction, the training time and complexity of the parallel network algorithm do increase as the number of parallel networks increases [20][21]. The data set is very unbalanced with regard to the class distribution. This, in combination with the small sample size, makes it difficult to train any type of classifier to predict the presence of cardiovascular disease. Out of 302 samples, 139 are of the cardiovascular disease type and the remainder are of healthy character.
This implies that the baseline prediction is 46%, and any prediction accuracy less than the baseline is not relevant. A common problem with imbalanced data sets is that they can lead to high false positive rates. Traditionally, the problem of false positive predictions is dealt with by over- or undersampling [22]. However, the drawbacks of techniques that adjust the sample distribution sometimes outweigh the benefits of generalizing the classifier. Any modification to the data set is merely an artificial alternative to the problem of inadequate training data. In this paper, it has been demonstrated that parallel neural networks are strong at addressing the imbalanced data set problem.
False positive rates of up to 25-30% of the positive class have been reported in the literature [23]. It has been demonstrated in this study that a true positive rate of up to 86% of the positive class is achieved by using eleven parallel networks. This is a significant improvement compared to previously demonstrated results. It is also evident that networks with forward propagation of untrained data do increase the robustness of the parallel system. However, it can be seen from Tables 2 and 3 that, after a certain number of parallel networks, the overall accurate prediction does not increase, which is a direct implication of an insignificant improvement of the MSE.
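The class-wise rates quoted here can be computed directly from a confusion matrix; the snippet below is ours, with dummy label vectors standing in for the actual test predictions.

```python
import numpy as np

def class_rates(y_true, y_pred):
    """True positive and false positive rates for a binary task (0 = absence, 1 = presence)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / (tp + fn), fp / (fp + tn)      # (TPR, FPR)

# Dummy labels standing in for the 64 presence / 88 absence test samples.
y_true = np.array([1] * 64 + [0] * 88)
y_pred = np.array([1] * 58 + [0] * 6 + [0] * 80 + [1] * 8)
tpr, fpr = class_rates(y_true, y_pred)
print(f"true positive rate = {tpr:.2f}, false positive rate = {fpr:.2f}")
```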
CONCLUSIONS
A system has been presented consisting of eleven parallel neural networks and a majority voting system.An empirical investigation demonstrates that it is possible to achieve >90% true positive rate for each class in the Cardiovascular Disease data set.
Fig. 1 .
Fig. 1. The distribution obtained by the use of the first two principal components of absence and presence data
5. ARTIFICIAL NEURAL NETWORKS
Fig. 2 .
Fig. 2. Antisymmetric sigmoid tangent hyperbolic activation function
Fig. 3. Block diagram of a committee machine based on ensemble-averaging.
Fig. 4. Signal flow graph of an expert neural network
The networks were trained to preserve generalization properties while achieving convergence, with a tolerance of at most 0.15 for all training samples recognized properly. The algorithm results in a decision about the attribution of the paragraphs whose textual descriptions are entered as inputs. In this research, two-hidden-layer, feed-forward, backpropagation artificial neural networks are used as the eleven committee machines. The cardiovascular disease data were 14-dimensional; therefore, the number of input ports equals the number of attributes used, i.e., fourteen. There is one hidden layer with fourteen neurons within each of the eleven neural networks in the committee machines, again for preserving generalization properties while achieving convergence.
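Under the description above (fourteen input attributes, a hidden layer of fourteen tanh neurons, backpropagation training), one committee member could be sketched with scikit-learn as follows; the solver, learning rate and iteration limit are illustrative assumptions, not the paper's settings.

```python
from sklearn.neural_network import MLPClassifier

# One committee member: 14 input attributes -> 14 tanh hidden neurons -> class label.
# hidden_layer_sizes and activation follow the description above; the solver,
# learning rate and iteration limit are illustrative assumptions only.
member = MLPClassifier(hidden_layer_sizes=(14,),
                       activation="tanh",
                       solver="sgd",
                       learning_rate_init=0.01,
                       max_iter=2000)

# Eleven such members, each fitted on its own copy of the training data, would then
# be combined with a majority vote as sketched earlier.
```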
Table 1: Table describing the 14 attributes that are not used.
Table 2: Performances of eleven committee machines in training.
Table 3: Performance measurements of eleven committee machines and majority vote. | 2019-02-13T14:07:57.001Z | 2013-03-30T00:00:00.000 | {
"year": 2013,
"sha1": "2da0a14c2515d0830f9e10807bf6c90e7336037e",
"oa_license": "CCBY",
"oa_url": "http://scjournal.ius.edu.ba/index.php/scjournal/article/download/49/49",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e12ea6b85ba35111b12de54861baf5988eeb6942",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
155079023 | pes2o/s2orc | v3-fos-license | African Generation Y Students' Attitudes towards Personal Financial Planning
Personal financial satisfaction arises from the ability to manage financial resources effectively. Individuals today face great challenges in managing their finances due to increased exposure to marketing activities, stemming from increased competition for consumers' money. However, even though various business courses comprise financial management content that focuses on the importance of managing and maximising wealth, students express little concern about their financial status, future wealth and retirement planning. This article reports on a study conducted in South Africa to determine African Generation Y students' attitudes towards personal financial planning. The South African Generation Y cohort, defined as individuals born between 1986 and 2005, accounted for an estimated 38 percent of the country's population in 2013. In terms of race, the African portion of this cohort accounted for approximately 83 percent of this cohort and 32 percent of the total South African population. Therefore, the significant size of the African Generation Y market makes them salient to industry professionals, including financial institutions and those involved in financial management, especially in financial planning. Despite the importance of this market segment, their consumer behaviour remains under-researched in general, specifically concerning their attitudes towards personal financial planning. In order to address this shortfall in the literature, a structured, self-administered questionnaire was used to gather data on personal financial planning attitudes from a non-probability convenience sample of 500 African Generation Y students across two South African public higher education institutions' campuses located in the Gauteng province. The collected data were analysed using t-tests. According to the results, African Generation Y students have a significant positive attitude towards personal financial planning. This article concludes with recommendations for financial institutions, including banks, insurance and investment companies, regarding effective ways to convey financial knowledge and product information to deliver improved financial service to this segment.
Introduction
The effective and efficient management of personal finances is critical for everyone (Chinen & Endo, 2012), particularly in a world where uncertainties prevail (Mazzucato, Lowe, Shipman & Trigg, 2010).Owing to continuous change, individuals are frequently confronted with new financial challenges, which, ultimately, culminate in uncertainties concerning their financial position and future financial well-being (Swart, 2012).Having a low level of debt, an active savings account and a retirement plan, as well as following an expenditure plan, will lead to financial wellness, which demonstrates an active state of financial wealth (Rutherford & Fox, 2010;Chinen & Endo, 2012).A comprehensive financial plan makes individuals attentive when dealing with financial issues and acts as a guide when making financial decisions, underlying the consequences of those decisions on other financial areas (Botha, Du Preez, Geach, Goodall, Rossini & Rabenowitz, 2012).James, Leavell and Maniam (2002) opine that many households express no intent to save at all, choosing to rather avoid the many warning signs, such as the decreasing buying power of money, unemployment and increased financial risk that abound in the financial-and economic environment (Shim, Xiao, Barber & Lyons, 2009).Students are no different from the rest of the population in this regard.Worryingly though, retirement planning, for the vast majority, ranks very low on their list of priorities, if it exists at all.Furthermore, individuals are reluctant to plan for the possibility of early retirement brought on by limited employment opportunities (Van Gijsen, 2002).Swart (2012) indicates that people, including students, take risks with their financial freedom either because they lack any understanding of the importance of financial management or because they simply choose to ignore financial matters because this makes them apprehensive.There is evidence that inadequate financial management ultimately results in high financial debt, severe credit card usage, high stress levels as well as low financial security (Sabri, MacDonald, Hira & Masud, 2010).Therefore, managing financial resources should be a priority (Falahati, Paim, Ismail, Haron & Masud, 2011).
Published literature on the South African Generation Y cohort's (individuals born between 1986 and 2005) (Markert, 2004;Eastman & Liu, 2012) consumer behaviour is limited and none focuses specifically on this segment's attitudes towards personal financial planning.In the South African market, the African portion of this cohort (hereafter referred to as African Generation Y) is of particular interest to marketers and professionals, including those in financial institutions and those involved in financial management, especially financial planning, given that they represent 83 percent of this generational cohort (Statistics South Africa, 2013).As a tertiary qualification is generally associated with a higher earning potential and higher social standing within a community (Schiffman, Kanuk & Wienblit, 2010;Bevan-Dye, Garnett & De Klerk, 2012), those African Generation Y university graduates are likely to be of particular interest to marketers of financial products and are also likely to act as role models amongst their peers.Therefore, the primary aim of the study reported on in this article was to determine African Generation Y students' attitudes towards personal financial planning and thereby address the gap in the literature.
Personal financial management
Personal financial management is concerned with managing personal finances through developing a strategic plan for productively managing an individuals or family unit's personal income, lifestyle expenditures and assets, and includes assisting them achieve their lifetime goals, taking into account various financial risks and future life events (Financial Planning Institute of South Africa, 2013b).Altfest (2004) states that personal financial management involves a process of managing financial resources in order to achieve personal economic satisfaction and indicates that because individuals move through different life cycle stages, which causes their goals and needs to evolve, personal financial management has become a self-motivated process.Boon, Yee and Ting (2011) maintain that financial freedom does not necessarily equate to great wealth but rather that individuals optimally utilise their income, irrespective of the level of that income.
Management, including personal financial management, is a process that consists of a continuous cycle of four core elements, namely planning, organising, leading and controlling (Robbins & DeCenco, 2005).The planning component of personal financial management gives direction to an individual's finances, minimises risk and uncertainty, and aids in avoiding the need for crisis management.It involves setting measurable and attainable goals and developing strategies to achieve those goals (Van Rensburg, Meintjies, Kroon, M ller, Lancaster, Lessing & Rankhumise, 2008).The organising process component of personal financial management commences once individuals have set financial goals and established strategies for achieving those goals.During this process, resources are allocated in a manner that allows for the execution of the planned strategies (Swart, 2012).Individuals should lead the management of their personal finances in such a way as to move towards future financial freedom (Banhegyi, Bosch, Botha, Booysen, Cunningham, Lotz, Musengi, Smith, Visser & Williams, 2007).It is imperative that individuals control their financial resources by continuously comparing their financial objectives with the actual performance of their financial assets, specifically by being vigilant to any serious deviations resulting from unforeseen circumstances such as investment losses or a tax account in arrears (Van Rensburg et al., 2008).
Individuals and households should plan and manage their finances in order to avoid the burden of debt. Managing finances forms an integral part of all individuals' everyday lives, from students receiving bursaries or pocket money, to working individuals earning an income, to retired people receiving a monthly pension or retirement annuity payment. Successfully managing personal finances requires not only having an understanding of certain financial concepts, but also knowing how to budget and being aware of the ratio between one's assets, savings and debt. A major hurdle in this regard is that many individuals have no formal training on how to manage their finances effectively (Botha & Musengi, 2012). Most individuals falsely assume that personal financial planning is only for the affluent members of society. Personal financial planning should be a necessity for everyone, irrespective of their personal wealth. Notwithstanding this, individuals with a significant amount of financial resources are likely to benefit even more from personal financial planning, as it provides assistance on how to spend and invest prudently (Gitman & Joehnk, 2008). According to Koh and Fong (2011), personal financial planning is a necessity if the individual desires to improve their standard of living, minimise their possibility of financial ruin, invest optimally and accumulate adequate wealth over time. Garman and Forgue (2008) state that being more astute regarding personal financial matters when facing the financial challenges, responsibilities and opportunities that life offers typically leads to several financial benefits. Examples of such benefits include marginal credit costs, reduced income taxes, better mortgage rates, lower insurance premiums and the like. In addition, financially astute individuals are more likely to make successful investment choices, better plan for their retirement and ensure that their estate is in order in the event of their death. Swart (2012) defines personal financial planning as the organisation of an individual's financial and personal data for establishing a strategic plan to manage income, assets and liabilities in a constructive manner to satisfy short- and long-term goals and objectives. Understanding money matters and the financial management process are prerequisites for efficient personal financial planning (Boon et al., 2011). Efficient personal financial planning requires constantly trying to predict future events and paying attention to future financial needs as early as possible (Swart, 2009). Unfortunately, many individuals make personal financial decisions based purely on chance or on the informal advice of friends and/or family. Such a casual attitude towards money would include having a savings plan based simply on the surplus funds at the end of the month (Murphy & Yetmar, 2010).
Globalised capital markets offer a constantly increasing diversity of financial products and investment opportunities, making personal financial planning an imperative to achieving personal financial goals (Boon et al., 2011).The development of an effective financial plan requires knowledge of the various personal financial planning areas, as each has potentially extensive positive and negative financial repercussions (Swart, 2012).Whilst the areas of financial planning include career planning, income tax planning, estate planning, investment planning, insurance planning, credit planning, retirement planning project planning, family planning, productivity planning, emigration planning and business planning (Cooper & Worsham, 2002;Warschauer, 2002;Lai & Tan, 2009;Swart, 2012), this study only focused on credit, insurance, investment, retirement and estate planning.
Credit planning involves prudently and purposefully incurring debt for the purpose of satisfying individual needs and achieving financial goals, while simultaneously managing cash inflows and outflows.This necessitates the development of a personal financial budget (Swart, 2012).Credit planning strategies include paying off loans, overdrafts, store and credit cards with the highest interest rate first, as well as selling non-essential assets or liquidating investments to settle debts such as mortgage bonds in order to benefit from early payment cost of capital savings (Botha et al., 2012).In summary, credit planning relates to moderation, affordability and the management of debt to avoid long-term negative consequences.
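The "highest interest rate first" rule described above can be illustrated with a small sketch; the debts and figures below are hypothetical and serve only to show the ordering.

```python
# Settle the debt carrying the highest interest rate first ("avalanche" ordering).
debts = [
    {"name": "store card",  "balance": 8000.0,  "rate": 0.21},
    {"name": "credit card", "balance": 12000.0, "rate": 0.18},
    {"name": "overdraft",   "balance": 5000.0,  "rate": 0.15},
]

for debt in sorted(debts, key=lambda d: d["rate"], reverse=True):
    print(f"Pay off {debt['name']} (rate {debt['rate']:.0%}) before the remaining debts")
```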
Insurance planning is the process of recognising, investigating and prioritising risks, followed by the process of employing strategies to mitigate, monitor and control the likelihood and/or consequence of unfortunate occurrences (Botha, Du Preez, Geach, Goodall, Palframan, Rossini & Rabenowitz, 2011).Insurance-related risks are planned by employing several strategies, such as risk avoidance (actions taken to avoid risky financial circumstances), risk reduction (actions taken to alleviate exposure to financial risks) and risk transference (purchasing short-and long-term insurance are recognised methods of transferring risks) (Botha et al., 2012).In insurance planning, it is important to balance the risk and the cost of insuring against the risk.
Investment planning is the utilisation of funds with the intention of earning an income from those funds (Swart, 2012).Prior to making an investment decision, individuals need to consider their short-, medium-and long-term financial goals, financial risks (such as death or disease) and financial needs (such as a life policy or medical scheme/insurance).Thereafter, they need to be able to evaluate and compare different investments and be knowledgeable on the different types of investment options available (Swart, 2009).Swart (2012) emphasises that investment planning is one of the principal areas of personal financial planning because it is a fundamental part of retirement planning, has a direct influence on safeguarding future financial well-being and is important for the achievement of short-, medium-and longterm goals.
Retirement planning is more complex than merely contributing to a pension, provident or retirement annuity fund, as it requires knowledge of tax laws, compound interest, present and future time value of money, and investment strategy (Botha et al., 2012).Retirement planning is about saving an amount of money whilst working in order to provide an income after retirement (Van Gijsen, 2002).The sooner an individual starts planning and saving for retirement, the greater the amount that will be available for retirement This suggests that individuals should invest as much as possible, as soon as possible and for as long as possible (Biehler, 2008).Swart (2012) proposes three steps when planning for retirement.First, establish retirement goals, such as maintaining an equal standard of living as before retirement.Secondly, establish an amount of money required to attain the set goals.This amount is determined by assessing the expenses and income during retirement.Lastly, prepare an investment portfolio within the constraints of the personal financial budget.
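The benefit of starting early follows directly from compound growth; a toy calculation with hypothetical figures illustrates the point (the savings amount and return are assumptions, not values from the text).

```python
def future_value(annual_saving, annual_return, years):
    """Future value of a fixed annual saving, compounded once a year."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_saving) * (1 + annual_return)
    return total

# Hypothetical figures: 12 000 saved per year at an 8% annual return.
print(round(future_value(12_000, 0.08, 40)))  # starting 40 years before retirement
print(round(future_value(12_000, 0.08, 20)))  # starting only 20 years before retirement
```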
Estate planning involves the organisation, management and securement, and disposition of an individual's estate so that the individual, his/her family, and other heirs may benefit and continue to benefit to the maximum from the individual's estate and assets during the individual's lifetime and after death, irrespective of when death may occur (Botha et al., 2012).Comprehensive estate planning requires timely planning (during the estate owner's life), testamentary planning (in the individual's will) and other planning (such as insurance planning).Estate planning is a continuous process and consists of two phases, as indicated by Botha et al. (2011).The first phase of the estate planning process concerns planning during the individual's life and includes preserving the value of an estate at its present value to attain estate duty and capital gains tax savings, while at the same time ascertaining that minimum liquidity difficulties arise.After the death of the individual, the second phase of the estate planning process commences, and involves the implementation of the provisions of the will of the individual.Swart (2012) explains that personal financial planning has an effect on all individuals and with knowledge of basic financial issues, individuals can take responsibility for a promising financial future that enables the transference of skills to others especially their children, guaranteeing a positive financial future for the younger generation.Worryingly though is that, according to a study done by the ANZ Banking Group in Australia and New Zealand, 37 percent of the participants (adults aged between 18 and 70 years and older) do not know the amount of money needed to fund a comfortable retirement (Louw, 2009).According to Swart (2012), less than one out of every ten individuals in South Africa is financially independent when retiring.These statistics are suggestive of the reality that most individuals do not know what personal financial planning involves or how to embark on such planning.
Generation Y
In generational studies, the youth are labelled as Generation Y and is classified as those individuals born between 1986 and 2005 (Markert, 2004;Eastman & Liu, 2012), which, in 2013, puts them at nine to 28 years of age.South Africa's population totalled around 52 981 991 in 2013, of which an estimated 38 percent formed part of the Generation Y cohort.In terms of race, the African portion of this Generation Y cohort accounted for approximately 83 percent of the South African Generation Y cohort and 32 percent of the total South African population (Statistics South Africa, 2013).Cui, Trent, Sullivan and Matiru (2003) claim that Generation Y is widely considered the next big generation with powerful aggregate spending.Cox, Kilgore, Purdy and Sampath (2008) opine that Generation Y members are positioned to become the wealthiest generation thus far.Furthermore, the financial appetite of Generation Y is growing owing to the fact that more members own a cheque account and a credit card.
According to Shaw and Fairhurst (2008) and Schlitzkus, Schenarts and Schenarts (2010), the Generation Y cohort were the first generation exposed to the Internet, mobile phones, convergent technologies and various multimedia platforms, including computer-generated social media networks such as Facebook, computer-generated social reporting such as Twitter and computer-generated social media such as YouTube.Given the size of the Generation Y market and its member's tendency to utilise technology to manage personal finances, financial institutions must start planning for the future.Technologically-innovative financial institutions that take advantage of technology that connects with Generation Y in ways with which the members are familiar with, such as online messaging, social networking and targeted offerings to mobile phones, will be successful in their dealings with Generation Y (Cox et al., 2008;Constantine, 2010).Robson (2012) concur that technology will act as a catalyst in creating a differentiating experience for Generation Y in managing personal finances.
As highlighted by Bevan-Dye and Surujlal (2011) and Bevan-Dye et al. ( 2012), Generation Y, in particular South Africa's African Generation Y members, are viewed as being optimistic, self-assured, education-directed and highly motivated individuals.It is essential that this generation engage in personal financial planning and management to secure a stable financial future.Unfortunately, these members, as indicated by Borden, Lee, Serido and Dawn (2008), have more lenient attitudes towards debt, meaning that debt instalments would possibly increase.Moreover, one could infer that this Generation Y cohort would have positive attitudes to the use and misuse of credit cards.Generation Y face the challenge of making financial decisions in an increasingly complex financial environment and, therefore, the management MCSER Publishing, Rome-Italy Vol 5 No 21 September 2014 115 of finances in all areas of personal financial planning should be improved (Cudmore, Patton, Ng & McClure, 2010).
Methodology
This study adopted a quantitative approach to determine African Generation Y students' attitudes towards personal financial planning.
Sample
For this study, the target population was defined as full-time undergraduate African Generation Y students, aged between 18 and 24 years, registered at South African higher education institutions (HEIs) in 2013.From the sampling frame of the 23 registered South African public HEIs in 2013 (Higher Education in South Africa, 2013), a judgement sample of two HEIs campuses located in the Gauteng province was selected, one from a traditional university and the other from a university of technology.A non-probability convenience sample of 500 African Generation Y full-time undergraduate students was then drawn from these two campuses.
Measurement instrument and data collection procedure
A structured self-administered questionnaire was utilised to gather the required data.This questionnaire included an existing scale from previously published research.The Boon et al. (2011) Financial Planning Scale was adapted and used to measure African Generation Y students' attitudes towards personal financial planning in the South African context.This 30-item scale comprises six constructs, namely the financial planning process (five items), credit planning (five items), insurance planning (five items), investment planning (eight items), retirement planning (three items) and estate planning (three items).Responses were measured on a six-point Likert scale (1= strongly disagree, 6= strongly agree).The questionnaire included a section designed to gather relevant demographical data from the participants.In addition, the questionnaire was accompanied by a cover letter explaining the purpose of the study and requesting participation from the students.
The questionnaire was piloted on a convenience sample of 40 students on a South African HEI campus that did not form part of the sampling frame, in order to ascertain reliability.The scale returned a Cronbach alpha value of 0.699, which are within the recommended Cronbach alpha level of 0.6 (Malhotra, 2010).The questionnaire was then administered to the identified sample.Lecturers at each of the two HEIs were contacted and asked if they would allow the questionnaire to be distributed to their students either during class or after class.Once permission had been obtained, fieldworkers distributed the questionnaire to students at the two campuses, who were duly informed that participation was on a voluntary basis only.
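For reference, Cronbach's alpha for a k-item scale is α = k/(k−1) · (1 − Σσ²_item/σ²_total); a generic sketch of that formula (not the statistical software used in the study) is shown below, with the data array as a placeholder.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array with one row per respondent and one column per Likert item."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)
```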
Sample characteristics
Of the 500 questionnaires distributed, 385 completed and usable questionnaires were returned.The majority of the participants in the sample indicated being 21 years of age, followed by those who indicated being 20 years of age and 19 years of age.The sample comprised more female participants than male participants.Concerning the participants' year of study, the majority indicated being first-year students, followed by those who indicated that they were in their third-and second year.With the exception of the Western Cape, each of South Africa's provinces was represented, with the majority of participants indicated Gauteng as their province of origin, followed by Limpopo.Table 1 provides the demographic information of the sample's participants.
Reliability and validity
In the main survey, an acceptable Cronbach alpha value of 0.820 was computed for the overall Financial Planning Scale.
The Cronbach alphas of the individual constructs ranged between 0.289 for retirement planning, 0.524 for credit planning, 0.628 for investment planning, 0.711 for both insurance planning and financial planning and 0.817 for estate planning.The retirement planning construct was excluded from further analysis given its unacceptably low Cronbach alpha value.An assessment of the credit card planning construct determined that the deletion of two items (convenience of credit cards and personal loans) would increase the Cronbach alpha to an acceptable 0.736 level.Following the removal of these two items, the average inter-item correlation values of each construct were computed in order to assess the construct validity of the scale.The average inter-item correlation coefficients for each of the five remaining construct all fell within the recommended range of 0.15 and 0.50, thereby suggesting both convergent and discriminant validity (Clark & Watson, 1995).
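The average inter-item correlation used here can be obtained as the mean of the off-diagonal entries of the construct items' correlation matrix; the following is a generic sketch, with the item matrix as a placeholder.

```python
import numpy as np

def average_inter_item_correlation(items):
    """Mean of the off-diagonal entries of the item correlation matrix;
    values between 0.15 and 0.50 are read above as evidence of construct validity."""
    corr = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
    return float(off_diagonal.mean())
```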
One sample t-test
Means above 3 were computed for all five constructs in the attitudes towards Personal Financial Planning Scale. In order to evaluate whether these calculated means were statistically significant, a one-tailed one-sample t-test was conducted. The level of significance was set at the typical 0.05 (α = 0.05) and the expected mean was set at mean > 3. As reported on in Table 2, significant p-values (p = 0.000 < 0.05) were computed for the five constructs of personal financial planning, indicating statistical significance. This implies that African Generation Y students do exhibit a statistically significant positive attitude towards the financial planning process, credit planning, insurance planning, investment planning and estate planning. Table 2 presents the computed means, standard deviations, standard errors, t-values and p-values. An independent-samples t-test was conducted to compare the personal financial planning mean scores of males and females. As is evident in Table 3, significant differences were found between males and females concerning the financial planning process (p = 0.006 < 0.05) and the investment planning construct (p = 0.031 < 0.05). For both the financial planning process and investment planning, males scored significantly higher. This suggests that, in comparison to their female counterparts, male African Generation Y students take the financial planning process and investment planning more seriously. There were no significant differences between males' and females' attitudes towards credit planning, insurance planning and estate planning. The results are reported on in Table 3.
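Both tests reported above are standard and can be reproduced, for example, with SciPy; the score arrays below are hypothetical placeholders for the actual construct scores.

```python
import numpy as np
from scipy import stats

# Hypothetical six-point Likert construct scores (placeholders for the real data).
construct_scores = np.array([4.2, 3.8, 4.5, 3.1, 4.0, 3.6, 4.4, 3.9])
male_scores = np.array([4.4, 4.1, 4.6, 3.9])
female_scores = np.array([3.8, 3.6, 4.0, 3.5])

# One-sample, one-tailed test of H0: mean = 3 against H1: mean > 3 (alpha = 0.05).
t_stat, p_two_sided = stats.ttest_1samp(construct_scores, popmean=3)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

# Independent-samples t-test comparing male and female construct means.
t_gender, p_gender = stats.ttest_ind(male_scores, female_scores)
```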
Discussion
This study investigated African Generation Y students' attitudes towards personal financial planning in terms of the financial planning process, credit planning, insurance planning, investment planning and estate planning in South Africa.In order to evaluate students' attitudes towards personal financial planning, this study used the Financial Planning Scale of Boon et al. (2011).The results of this study provide valuable insights into the attitudes of African Generation Y students towards personal financial planning.The findings of the study suggest that African Generation Y students have a positive attitude towards personal financial planning.This positive attitude, coupled with the significant size of the African Generation Y cohort and the future higher earning potential of role model status of graduates make African Generation Y students an important target segment for a range of financial institutions, including banks, insurance companies, investment companies, and the like.Given that this generational cohort are known to be technologically astute and comfortable using online and mobile communication, financial institutions should incorporate new digital platforms to reach this target segment.Establishing a Facebook page and designing mobile telephone advertisements that appeal specifically to this ethnic and age cohort, whether it be in terms of the music and visual copy or even the use of a local African celebrity, will help financial institutions engage better with this segment.In this regard, many financial institutions are in the process of trying to move their customers online, and this cohort are likely to be much easier to convert to such technological advancements as online trading, mobile banking and online credit card and loan applications.
Students' attitudes towards estate planning were ranked the highest, suggesting that African Generation Y students perceive this construct to be the most important part of financial planning. Students find it important to have a will and consider estate planning essential. Investment planning was ranked the second highest, suggesting that students perceive investment as essential in personal financial planning. The evidence in the sample suggests that students consider investing as important and will carefully evaluate the different investment alternatives available before investing. Furthermore, considering the opinion of friends and/or family before investing ranked the lowest in this construct. Insurance planning was ranked the third highest, indicating that students perceive insurance as valuable. The evidence in the sample suggests that students find life insurance essential, as well as planning to have sufficient life insurance. Credit planning was ranked the fourth highest construct, with avoiding maxing out or going over the limit on their accounts ranked the highest in this construct, followed by paying their accounts on time. Students' attitudes towards the financial planning process were ranked the lowest, suggesting less positive attitudes towards the financial planning process. The evidence in the sample suggests that although students indicated knowing what personal financial planning is and having positive attitudes towards setting personal financial goals and objectives, they do not implement a personal financial plan with the help of experts.
The results of this study suggest attitude differences pertaining to the financial planning process and investment planning between male and female students.As such, financial institutions should appeal differently to males and females when marketing financial products and services.Advertisements depicting strong independent African women managing their own personal finances and making their own investment decisions may help to make African Generation Y female students more aware of the importance of taking charge of their own future financial well-being.
Like most studies, several limitations can be identified within this study, consequently presenting several opportunities for future research.Within this study, a non-probability convenience sampling approach was applied to survey the study's participants.Therefore, there is a necessity to take care in interpreting the results.Furthermore, the study lacks the accurateness of a longitudinal study since this study made use of a single cross-sectional design.Given the dearth of research conducted on the consumer behaviour of the African Generation Y cohort in South Africa, especially related to personal finance, future research in this area is recommended.
Conclusion
In an economy characterised by high interest rates, inflation, unemployment, political instability and various other economic downfalls, individuals' money matters are constantly under threat.Personal financial management, with reference to personal financial planning, is a recognised intervention to secure a promising and stable financial standing, in both the short-and long-term, and may combat the adverse effects of these economic factors.The Generation Y consumer, especially the African Generation Y consumer, is regarded as the future of South Africa and their attitude towards personal finance is expected to shape the continually changing financial and economic environment, especially concerning personal financial management.The findings of this study indicate that African Generation Y students have significant positive attitudes towards personal financial planning.Through better understanding students' financial management, the results of this study may aid in creating awareness of certain shortfalls in African Generation Y students' personal financial management.This in turn will aid financial institutions and professionals in gauging effective ways to convey financial knowledge and product information to this target market to deliver improved financial service.This is likely to benefit the nation as a whole.Industry professionals, including financial institutions and those involved in financial management, especially in financial planning, are advised to consider the student portion of this age and ethnic cohort in particular given that African Generation Y students with a tertiary education are likely to manifest as pertinent opinion leaders and financial trendsetters amongst their peers.
Table 1 .
Sample description
Table 2 .
African Generation Y students' attitudes towards personal financial planning
Table 3 .
Gender differences on attitudes towards personal financial planning | 2017-09-08T15:56:03.044Z | 2014-09-06T00:00:00.000 | {
"year": 2014,
"sha1": "43e725c9f4c8ebc378f40f773a9b8c43caebcd5a",
"oa_license": "CCBYNC",
"oa_url": "https://www.richtmann.org/journal/index.php/mjss/article/download/4184/4094",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "43e725c9f4c8ebc378f40f773a9b8c43caebcd5a",
"s2fieldsofstudy": [
"Business",
"Sociology",
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
263926557 | pes2o/s2orc | v3-fos-license | Vascular endothelial growth factors C and D and lymphangiogenesis in gastrointestinal tract malignancy
Vascular endothelial growth factor-C (VEGF-C) and VEGF-D are members of the VEGF family of cytokines and have angiogenic and lymphangiogenic actions. In gastric adenocarcinoma, VEGF-C mRNA and tissue protein expression correlate with lymphatic invasion, lymph node metastasis and in some reports, venous invasion and reduced 5-year survival. Patients with gastric adenocarcinomas containing high levels of VEGF-C expression have significantly reduced 5-year survival rates, and VEGF-C expression is an independent prognostic risk factor for death. The role of VEGF-C in oesophageal squamous and colorectal cancer and VEGF-D in colorectal cancer is not clear, with conflicting reports in the published literature. In order to exploit potential therapeutic applications, further research is necessary to define the precise roles of these cytokines in health and disease.
Lymphangiogenesis, the development of new lymph vessels, is a relatively new area of clinical investigation. Increased interest in this field has been heightened by the discovery of new vascular endothelial growth factor (VEGF) family members, which possess lymphangiogenic roles.
Vascular endothelial growth factor-C (VEGF-C) and VEGF-D are secreted glycoproteins that are structurally similar, sharing areas of homology with one another and with the angiogenic growth factor VEGF-A (Joukov et al, 1996;Achen et al, 1998). They are specific ligands for the tyrosine kinase receptor, vascular endothelial growth factor receptor (VEGFR)-3 (flt-4) (Joukov et al, 1996;Achen et al, 1998). Both cytokines are subject to proteolytic processing, which also enables them to act as ligands for VEGFR2 (KDR/flk-1) Stacker et al, 1999). Vascular endothelial growth factor receptor 2 is expressed on vascular endothelial cells and is essential for the embryonic differentiation of endothelial and haematopoietic cells and formation of blood vessels (reviewed in Veikkola et al, 2000). Vascular endothelial growth factor receptor 3 is expressed on vascular endothelium early in development and on angiogenic endothelium, but is mainly restricted to the lymphatic endothelium in the adult (Kaipainen et al, 1995). Consequently, VEGF-C and D are implicated through their receptor affinities in angiogenic and lymphangiogenic pathways in health and disease (Stacker et al, 2002).
ROLES OF VEGF-C AND VEGF-D
Study of the lymphatic system and lymphatic endothelial cells has been limited by a lack of specific lymphatic vessel markers, lack of lymphatic endothelial cells for culture and limited animal models. These problems are currently being overcome with a variety of methods. The recent discovery of specific lymphatic vessel markers, such as the hyaluronan receptor LYVE-1, podoplanin and Prox-1, new antibodies to these markers and antibody combinations has aided the identification of lymphatic vessels in histological specimens (Stacker et al, 2002) (Table 1). The exploitation of the differential expression of these new specific cell surface markers by lymphatic and blood vascular endothelial cells has allowed the separation of stable lymphatic cell populations for study (Podgrabinska et al, 2002). Animal models have been adapted from angiogenesis research and specific tumour, transgenic and knock-out models developed.
Our current understanding of the roles of VEGF-C and VEGF-D is derived mainly from in vitro and in vivo studies. In vitro studies have shown that VEGF-C and VEGF-D exhibit mitogenic effects for vascular and lymphatic endothelial cells and survival-promoting abilities for lymphatic endothelial cells through VEGFR3 Achen et al, 1998;Marconcini et al, 1999;Veikkola et al, 2001). Both growth factors promote angiogenesis in in vitro assays (Joukov et al, 1996;Joukov et al, 1997;Marconcini et al, 1999). Vascular endothelial growth factor-C promotes the formation of capillary-tube structures by lymphatic endothelial cells, but not blood vascular endothelial cells, in a collagen sandwich assay (Podgrabinska et al, 2002).
In vivo studies, using models adapted from angiogenesis research, have confirmed the angiogenic abilities of VEGF-C and VEGF-D and the lymphangiogenic effect of VEGF-C (Oh et al, 1997). Transgenic mouse models, which overexpress VEGF-C or VEGF-D in the epidermis, have shown cytokine-dependent, VEGFR3-mediated dermal lymphatic vessel enlargement and lymphatic endothelial cell proliferation without alteration in blood vasculature (Makinen et al, 2001; Veikkola et al, 2001). Various tumour models have been constructed in which overexpression of VEGF-C or VEGF-D is demonstrated. These studies consistently show increased aggressiveness of the transfected cancer cell lines, intratumoural lymphangiogenesis, dilated and increased numbers of peritumoural lymphatics, enhanced rates of lymph node metastasis and increased tumour angiogenesis (Mandriota et al, 2001; Skobe et al, 2001).
Despite the implication of VEGF-C and VEGF-D in lymphangiogenic and angiogenic pathways in these studies, the role of the growth factors in the progression of human malignancy is unclear and the existence of functional lymphatics and lymphangiogenesis in human malignancy has been debated (Leu et al, 2000;Clarijs et al, 2001;Padera et al, 2002). Recent studies in head and neck cancer (Beasley et al, 2002;Maula et al, 2003) and melanoma (Straume et al, 2003) have demonstrated the existence of proliferating intratumoral lymphatic vessels. Further research is required to determine whether this is the case for all the different human malignancies that spread predominantly by the lymphatic route. The situation is likely to be clarified further by the use of antibodies and antibody combinations for the more specific lymphatic markers in conjunction with functional assays.
VASCULAR ENDOTHELIAL GROWTH FACTOR-C AND VEGF-D IN HUMAN MALIGNANCIES
The dissemination of malignant cells to the regional lymph nodes is an early step in the progression of many common solid tumours and is an important determinant of prognosis. Positive associations have been found between the expression of VEGF-C in human malignant tissue with adverse clinicopathological features including lymphatic invasion and lymph node metastasis. Expression of VEGF-C mRNA is increased in a variety of human malignancies (Salven et al, 1998). Tumour types investigated include breast, gastric, colorectal, oesophageal, prostate, pancreas, cervical, thyroid, non-small-cell lung cancers, lung adenocarcinoma and laryngeal cancers. Clinically important areas of interest are the association between VEGF-C and -D expression, intra-and peritumoral lymphatic density, lymphatic and venous invasion, lymph node metastasis and survival.
Methodological considerations
Many published reports conflict in their outcomes and conclusions. This may be partly explained by the use of different methodological tools and assumptions by their authors.
Immunohistochemical techniques and microvessel counting examine the tissue as near its condition in vivo as possible. Even so, results obtained examining malignant tissue at the invasive edge of tumours may not concur with results from central and superficial parts of the tumour (Furodoi et al, 2002). Scoring methods for both immunohistochemical staining and vessel counting vary between studies, with consequent difficulties in the extrapolation of results. Furthermore, the subjective nature of assessment of staining intensity and the frequent lack of positive or negative tissue controls in immunohistochemical analyses can confound analysis.
Studies examining mRNA levels provide an estimate of overall expression in the tissue fragment analysed, including tumour cells, stroma and normal mucosa, as RNA extraction necessarily entails tissue disruption. The nature of the interaction between expressed cytokines and the tumour microenvironment is at the cellular and paracrine level (Furodoi et al, 2002). Consequently, analysis of global tumour mRNA levels may miss subtleties of tissue expression that are crucial for tumour behaviour. The expression of mRNA in a tissue fragment may not necessarily equate with the expression of protein by the tumour.
Evidence for tumour-related lymphangiogenesis is derived from the presence of intratumoral lymphatics in xenograft studies. However, these vessels may be trapped in the tumour mass as a consequence of the methodology of model construction. Consequently, studies involving transgenic animals overexpressing VEGF-C, in which dilation of peritumoral lymphatics are seen (Mandriota et al, 2001) may reflect the situation in spontaneously arising human tumours more accurately .
Further discussion will focus on the current evidence for the role of VEGF-C and VEGF-D and their signalling receptors for the common sites of malignancy of the gastrointestinal tract (Table 2).
Gastric cancer
Gastric cancer is a leading cause of cancer death worldwide. Lymph node status is important in the prediction of prognosis. Potential molecular markers that predict lymphatic involvement would improve the clinical management of this disease. The role of VEGF-C in predicting lymphatic invasion and lymph node metastasis in gastric cancer has been investigated in several studies (Table 2). There are no studies that have examined the role of VEGF-D in gastric cancer. Immunohistochemical analysis of tumour tissue has demonstrated that VEGF-C immunoreactivity is restricted to gastric cancer cells and is observed diffusely throughout the cytoplasm (Yonemura et al, 1999, 2001; Ichikura et al, 2001). The percentage of gastric tumours that are positive for VEGF-C protein expression varies from 26 to 51% (Table 2) (Yonemura et al, 1999; Ichikura et al, 2001; Kabashima et al, 2001; Takahashi et al, 2002), although this may be accounted for in part by the use of varying methodology as discussed.
Lymphatic invasion and lymph node status correlate positively with tissue expression of VEGF-C in gastric cancer (Yonemura et al, 1999;Ichikura et al, 2001;Kabashima et al, 2001;Amioka et al, 2002;Takahashi et al, 2002) (Table 2). In addition, positive VEGF-C tissue expression in early gastric cancer (confined to the mucosa or submucosa) was significantly associated with lymphatic invasion, potentially helping to predict those individuals who would benefit from more or less extensive surgical resections (Kabashima et al, 2001). Similar associations have been demonstrated concerning the expression of VEGF-C mRNA expression in gastric cancer tissue. Malignant tissue expressed increased VEGF-C mRNA compared with adjacent normal mucosa (47 vs 13% (Yonemura et al, 1999); 55 vs 13% (Yonemura et al, 2001)). Furthermore, positive lymph node status, lymphatic and venous invasion were also associated with expression of VEGF-C mRNA (Yonemura et al, 1999).
The clinical impact of the association between VEGF-C expression and prognosis is not fully understood (Table 2). Nonsignificant trends towards reduced survival in VEGF-C expressing gastric cancers have been found (Ichikura et al, 2001). However, in 117 patients with gastric cancer, Yonemura et al (1999) demonstrated that high levels of VEGF-C expression were associated with poorer prognosis and decreased survival. Further significant differences in survival associated with VEGF-C status have been reported by Takahashi et al (2002) in a group of 65 cancer patients. A potentially important clinical finding of this study was the negative correlation of dendritic cell density with VEGF-C expression in the tumour. The effect of VEGF-C on survival may be due, in part, to its regulatory function on dendritic cells with potential reduced immunosurveillance of the tumour (Kabashima et al, 2001).
In contrast to VEGF-C, VEGFR3 immunoreactivity in gastric tumours is restricted to endothelial cells of mucosal and submucosal vessels that are regarded primarily as lymphatic vessels but also to a very few small blood vessels. Consequently, the majority of VEGFR3-positive vessels in gastric cancer are considered as lymphatics (Yonemura et al, 1999, 2001). A positive correlation between VEGFR3 and VEGF-C mRNA expression was seen in gastric cancer tissue specimens (Yonemura et al, 1999, 2001). Microvessel counts for VEGFR3-positive vessels showed a significant increase in VEGF-C mRNA-positive tumours compared to VEGF-C mRNA-negative tumours (6.96±6.05 vs 2.16±2.00, P<0.001). However, there was no overall increase in the VEGFR3-positive vessel count in tumour stroma compared with normal gastric mucosa when both VEGF-C mRNA-positive and -negative tumours were considered together (4.62±5.85 vs 2.48±1.64, P = 0.067) (Yonemura et al, 2001). Similar increases in VEGFR3-positive vessel counts are seen in gastric cancers that are lymph node positive, show lymphatic invasion or are poorly differentiated (Yonemura et al, 2001).
In summary, in gastric cancer, expression of VEGF-C mRNA is higher in tumour than in normal mucosa. Vascular endothelial growth factor-C mRNA and immunohistochemically detected tissue expression of the protein in gastric cancer correlate with lymphatic invasion and lymph node metastasis and in some studies, venous invasion with reduced survival (Table 2). Vascular endothelial growth factor receptor 3 expression is mainly found on lymphatic vessels in gastric tumours and VEGFR3 mRNA levels and tissue expression parallel that of VEGF-C. These results suggest that VEGF-C and VEGFR3 act together in a paracrine fashion in the microenvironment of the gastric tumour.
Oesophageal cancer
Oesophageal cancer has a poor prognosis, which is dependent on the presence of lymph node metastases. Limited and conflicting evidence exists for the role of VEGF-C in oesophageal cancer and no research is available concerning VEGF-D. Kitadai et al (2001) analysed the relationship between the expression of VEGF-C and clinicopathological characteristics in oesophageal squamous cell carcinoma. In vitro analysis demonstrated that four of the five oesophageal carcinoma cell lines studied expressed VEGF-C mRNA. Ex vivo analysis confirmed VEGF-C mRNA to be present in eight of the 12 oesophageal squamous carcinomas. In a further 48 archival specimens, 39.6% showed positive immunohistochemical staining for VEGF-C, which correlated with stage of disease, lymphatic invasion, venous invasion and lymph node metastasis (Po0.01) and depth of tumour invasion (Tumour in situ (Tis) vs T1, Po0.05; Tis vs T2, T3, Po0.01). Interestingly, the number of blood vessels detected by immunohistochemical staining for CD34 was significantly higher in the VEGF-C-positive tumours than the VEGF-C-negative tumours (Kitadai et al, 2001), suggesting that VEGF-C may be involved in both angiogenic and lymphangiogenic processes in tumours. However, a similar study examined larger numbers of oesophageal squamous carcinomas for immunohistochemical expression of VEGF-C protein, but did not report a significant association between the expression of the cytokine and any clinicopathological factor other than histological grade (Noguchi et al, 2002) (Table 2). Vascular endothelial growth factor-C expression is associated with neoplastic progression in the oesophageal mucosa. Using immunohistochemical detection, normal oesophageal mucosa does not express VEGF-C although there is an increase in expression in Barrett's epithelium as it progresses through dysplasia to adenocarcinoma, and this is paralleled by a similar increase in VEGFR3 expression on lymphatic vessels (Auvinen et al, 2002).
Colorectal cancer
Colorectal cancer is similar to oesophageal cancer, in that the role of VEGF-C is less well understood than in gastric carcinoma. Conflict also exists as to the role of VEGF-D. Recent publications illustrate conflicting results regarding protein and gene expression in relation to clinicopathological measures (Table 2).
With respect to VEGF-C expression, several authors have demonstrated associations between growth factor expression and poor clinicopathological outcome (Akagi et al, 2000;Furodoi et al, 2002). Immunohistochemical detection of VEGF-C expression at the deepest invasive site of colorectal carcinoma was found in 47% of 152 advanced tumours. Expression correlated with lymphatic and venous invasion, lymph node status, Dukes' stage, liver metastasis, depth of invasion, poorer histological grade and microvessel density (Furodoi et al, 2002). Vascular endothelial growth factor-C expression and lymph node metastasis were independent prognostic factors for 5-year survival on multivariate analysis (odds ratio (OR) 9.10, P ¼ 0.0272 and OR 8.52, P ¼ 0.0322, respectively). The study also emphasised the paracrine nature of the interaction between VEGF-C and the tumour microenvironment and the positive relationship between VEGF-C and tumour angiogenesis (Furodoi et al, 2002). Similar associations between tissue VEGF-C expression and clinicopathological factors have been described by Akagi et al (2000) with consistent patterns of VEGF-C expression in involved lymph nodes and primary tumours, although in this study only a nonsignificant trend towards decreased survival was identified in VEGF-C positive groups.
Contradictory evidence exists concerning the role of VEGF-C in lymphatic metastasis in colorectal cancer. Studies examining mRNA levels of various VEGF family members tend to show a lack of association with clinicopathological factors. George et al (2001) showed an increase in VEGF-A and VEGF-C mRNA in carcinomas (P ¼ 0.006 and P ¼ 0.004, respectively) but not in colonic polyps (P ¼ 0.22 and 0.5, respectively). No association was found between the increased level of VEGF-C mRNA and lymph node status, although a positive relationship existed between positive lymph nodes and VEGF-A mRNA expression. Patterns of VEGF-C mRNA expression were similar in the primary tumour and lymphatic metastases. The mRNA findings of the study were confirmed by immunohistochemistry, which showed no correlation between positive staining for VEGF-A, VEGF-C or VEGF-D and lymphatic spread (George et al, 2001). Further analyses of VEGF family mRNA levels in the adenoma -carcinoma sequence showed that of VEGF-A, VEGF-B and VEGF-C, only VEGF-A mRNA levels were consistently raised in invasive malignancy and this became apparent early on in disease progression, as levels were elevated to a similar extent in tumours with and without lymph node metastases or distant spread (Andre et al, 2000).
A few studies have focussed on the role of VEGF-D in colorectal malignancy with conflicting results. Tumour expression, assessed by RT-PCR, of VEGF-D mRNA was less than in normal tissue (George et al, 2001), while White et al (2002) found higher levels of VEGF-D protein expression in cancers detected by immunohistochemistry. The increased VEGF-D protein levels detected were associated with lymph node involvement and reduced overall and disease-free survival (White et al, 2002).
The role of VEGF-D within tumours is not well understood, but it has been suggested that VEGF-D may act competitively as an antagonist to the other VEGF family members. George et al (2002) postulated that a reduction in VEGF-D levels in the adenomacarcinoma sequence allowed the more potent angiogenic cytokines VEGF-A and VEGF-C to bind more readily to the signalling receptors VEGFR2 and VEGFR3. The balance between various members of the VEGF family, their relative levels within a tumour, the extent of proteolytic processing and receptor availability may be important in determining tumour behaviour. The importance of the balance between VEGF-C and VEGF-D is illustrated in lung adenocarcinoma, where a low ratio of VEGF-D:VEGF-C (i.e., low VEGF-D and high VEGF-C) is associated with lymph node metastasis and lymphatic invasion (Niki et al, 2000).
Upregulation of cytoplasmic VEGFR3 protein expression has been demonstrated immunohistochemically in colorectal cancer tissue specimens and increased expression was associated with poorer overall survival (P<0.05) (Witte et al, 2002). This again demonstrates the potent paracrine nature of the interaction between the cytokines and their receptor in the microenvironment of the tumour.
In conclusion, conflicting reports exist for the precise involvement of VEGF-C and VEGF-D in lymphatic invasion, lymph node metastasis and prognosis in colorectal cancer. The importance of appropriate sampling and consistency in methodology of immunohistochemical staining and scoring are fundamental to interpretation and comparison between studies.
CONCLUSIONS
Lymphangiogenesis is an exciting area of research in cancer biology. The growth factors VEGF-C and D are involved in this process and possess angiogenic and lymphangiogenic properties. The expression of lymphangiogenic factors is increased in many human malignancies and this is illustrated with respect to malignancies of the gastrointestinal tract. In gastric adenocarcinoma, lymphatic metastasis and lymphatic invasion are enhanced by increased expression of VEGF-C. The precise role for VEGF-C in colorectal and oesophageal squamous malignancy and VEGF-D in other tumours is not clearly understood, but is clearly important at a paracrine level. Further studies using combinations of new lymphatic markers and functional assays will help clarify the influence of these and other cytokines in the future. However, an essential requirement to allow comparison between studies is the development of consistent experimental methodology. This must include the use of antibodies of defined specificity, consistent immunohistochemical protocols with appropriate use of controls and widespread consensus in scoring techniques. Further understanding of the function and actions of VEGF-C and VEGF-D is required to optimise therapeutic strategies, avoiding unwanted side effects, in the treatment of benign and malignant disease. | 2017-11-08T01:07:11.233Z | 2003-07-29T00:00:00.000 | {
"year": 2003,
"sha1": "db3721265c4a96adb2a2e48f96a5fc6eb316c504",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/6601145.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "db3721265c4a96adb2a2e48f96a5fc6eb316c504",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256722812 | pes2o/s2orc | v3-fos-license | Dynamic Diagnostic Tests and Numerical Analysis of the Foundations for Turbine Sets
This paper shows current trends in testing and numerical analysis of dynamic loading in relation to a real frame foundation for a turbogenerator set. The analysis of the machine's foundations, which are subjected to static and dynamic loads, is a complex problem combining the issues of geotechnics, structural engineering, and vibration theory. The authors present a case study of the assessment of the foundation's technical condition. The main objective of this study is to perform and compare experimental and numerical dynamic analysis which includes the measurement of the acceleration, speed, and amplitude of the natural vibrations of the foundation during the operational speed of the turbogenerator. In addition, auxiliary material tests were carried out to fully diagnose the foundation and obtain the material properties required for the numerical analysis. They included both destructive and non-destructive tests of concrete strength, the evaluation of the degree of its carbonation, and the scanning of the reinforcement distribution. The research presented in the paper is intended to facilitate the preparation of appropriate data for the design of the foundation renovation and strengthening.
Introduction
The foundations of turbine sets are, in most cases, reinforced concrete supporting structures. The turbine sets consist of a synchronous generator that produces electricity, and a steam or gas turbine in which the enthalpy of the medium is transformed into mechanical energy of rotary motion [1]. In Poland, turbine sets are the most common energy sets that operate with the combustion of coal dust. The newer structures are gas or gas steam sets with a heat recovery steam generator. In such cases, the structure of the turbo set can be a multi-shaft and multi-body structure supported by metal foundation plates attached to a concrete foundation.
The shape of the foundation structure is mainly the result of the technical solution of the turbine set [2]. Foundations for turbine sets can be divided into two main types, depending on the shape of the structure: block foundations and frame foundations.
The block foundations are made up of a single-body reinforced concrete structure with numerous cutouts, which is placed on the ground or on piles. The basic feature of block foundations is their high stiffness, which allows them to be classified as non-deformable structures that are placed on elastic subsoil.
Variations of block foundations include open box foundations and closed box foundations.
The second type are frame foundations that consist of an upper plate or upper grate, columns or walls, and a raft plate set on the ground or on piles. The base raft plate, with a thickness selected to ensure the adequate stiffness and non-deformation of the entire foundation structure, is also intended to create conditions for the full restraint of the columns in the frame part. In frame foundations, vibrations are the result of the elasticity of the structure itself and, moreover, of the ground elasticity on which the bottom raft plate is placed.
Frame foundations are the basic type of supporting structure for high-speed machines that give lower inertia forces than reciprocating machines.
The main source of the dynamic forces in a turbine generator set are interactions that are related to the occurrence of vibrations with the frequency of the first and second harmonics and also those caused by its normal operation.
The loads from the turbine set to the foundation structure are transferred through the foundation bolts that fasten its individual parts. These bolts are screwed onto the recessed parts to the top plate of the foundation.
Foundations for turbine sets work mainly under dynamic load, and therefore high-quality materials with an appropriate strength and durability should be used for their construction. Due to the nature of the load and the need to maintain a rigid and homogeneous concrete structure with the same material properties, it is very important to ensure the homogeneity of the concrete in the entire foundation [3].
There are many scientific problems concerning the dynamics of foundations for turbine sets, which not only concern the structure itself, its geometric shape, material parameters, and boundary and initial conditions, but also those issues related to the identification of the place, value, and precise characteristics of the dynamic forces that originate from the rotating parts of the turbine set acting on the foundation structure [4,5]. The dynamic properties that characterise machines are the frequencies and forms of natural vibrations. In each rotor bearing (in the turbine and generator), the harmonic force in the direction perpendicular to the longitudinal axis of rotation can be defined as F_i(t) = m_i·e·ω²·sin(ωt), where: m_i - the proportional part of the rotating mass supported by the i-th bearing, e - the eccentricity of the mass, ω = 2πf - the cyclic frequency of the turbogenerator's operation, f - the operating frequency (f_0 = 50 Hz at the nominal speed of the turbine).
The harmonic analysis of the foundation is performed for frequencies in the range 0.8f_0 ≤ f ≤ 1.2f_0, i.e., between 40 and 60 Hz [6].
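To make the excitation model concrete, the short Python sketch below evaluates the unbalance force amplitude m_i·e·ω² and lists the frequency band 0.8f_0 to 1.2f_0 swept in the harmonic analysis. The rotating mass and eccentricity used here are hypothetical illustration values, not data from the analysed machine.

```python
import numpy as np

def unbalance_force_amplitude(m_i, e, f):
    """Amplitude of the harmonic bearing force m_i * e * omega^2.

    m_i : rotating mass supported by the i-th bearing [kg]
    e   : eccentricity of the mass [m]
    f   : operating frequency [Hz]
    """
    omega = 2.0 * np.pi * f           # cyclic frequency [rad/s]
    return m_i * e * omega ** 2       # force amplitude [N]

f0 = 50.0                             # nominal operating frequency [Hz]
# Hypothetical values: 10 t of rotating mass and 20 um of eccentricity.
F = unbalance_force_amplitude(m_i=10_000.0, e=20e-6, f=f0)
print(f"Unbalance force amplitude: {F / 1e3:.1f} kN")

# Frequency band swept in the harmonic analysis: 0.8*f0 <= f <= 1.2*f0.
f_band = np.linspace(0.8 * f0, 1.2 * f0, 9)
print("Harmonic analysis frequencies [Hz]:", np.round(f_band, 1))
```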
Diagnostics of the structure of foundations for a turbogenerator (Figure 1) is one of the most complicated tasks [7][8][9]. It includes tests, calculations, and analyses that take into account the actual state of the structure and the occurring loads and operational factors, and which also require the adoption of an adequate model that shows the behaviour of the structure [10,11]. Therefore, structural diagnostics often require the use of the most modern research methods, as well as complex variants of analytical methods that use elaborate computer simulations [12][13][14][15][16]. The main objective of this study is to perform and compare experimental and numerical dynamic analysis which includes the measurement of the acceleration, speed, and amplitude of the natural vibrations of the foundation during the operational speed of the turbogenerator. In addition, it is verified whether resonance conditions are avoided and whether the vibration magnitudes satisfy code limits. Results of the experimental tests in situ enable one to design foundation structure strengthening and update structural models for numerical analysis. The research presented in the paper is intended to facilitate the preparation of appropriate data for the design of the renovation and strengthening of the foundation.
Materials and Methods
The authors analysed the existing turbogenerator foundation for a combined heat and power plant. The following experimental dynamic tests were used to verify the resonance conditions and magnitude of vibrations: measurement of the acceleration, speed, and natural vibration amplitude of the foundation during the normal operation of the turbogenerator.
In addition, the following material tests were carried out to fully diagnose the foundation and obtain the material properties required for the numerical analysis:
- destructive tests of cores cut from the structure in order to assess the compressive strength of concrete;
- investigations concerning the homogeneity of strength characteristics, and the estimation of the concrete's strength grade on the basis of sclerometer tests [17,18];
- measurements of the extent and intensity of the carbonation process of the subsurface concrete layer conducted on the cut cores using the rainbow test [19,20];
- localisation of the concrete's reinforcement and the determination of its arrangement, its diameters, and the thickness of the concrete's cover using a non-destructive electromagnetic method [21], followed by the comparison of the obtained results with archival documentation.
Figure 2 shows a general flow chart of the analysis performed.
The experimental tests were carried out with direct access to the structure and the use of test equipment that met all the necessary technical requirements for this type of measurement. After the experimental analysis of the foundation, the next step of the research was numerical modelling with a parametric study. It included the structure of the foundation, which was considered as a reinforced concrete frame system founded on the ground through the raft plate. Dynamic numerical analysis was carried out using AxisVM X6 software. The calculation of the natural frequencies and the appropriate mode shapes is the basis for determining the dynamic parameters of the foundation. The numerical determination of the eigenfrequencies and the vibration modes of the turbogenerator foundation is quite difficult in this case and, therefore, was verified by experimental measurements. The results of the experimental analysis were utilised for creating the numerical model.
Dynamic Actions-A Literature Review
Dynamic actions are additional loads on the foundation that must be taken into account in strength calculations. In the dynamic analysis of the foundation, the natural frequencies are determined (there should be a 20% difference compared to the operating frequency of the machine [22]) to check the possibility of the occurrence of resonance. It is also checked if the vibration amplitude resulting from the operation of the turbogenerator is within the acceptable standard limits. In addition, the calculations include the determination of stresses in various structural elements of the foundation (columns, beams, top plate, and base raft plate in the case of a frame foundation) and the checking of their loadbearing capacity.
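A minimal sketch of the resonance check described above is given below; it verifies the required 20% separation between the natural frequencies and the operating frequency of the machine. The natural frequencies listed are hypothetical examples.

```python
def resonance_margin_ok(f_natural, f_operating, required_margin=0.20):
    """Check that a natural frequency differs from the machine operating
    frequency by at least the required relative margin (20 %)."""
    return abs(f_natural - f_operating) / f_operating >= required_margin

f_op = 50.0  # operating frequency of the turbine set [Hz]
for f_n in (12.3, 27.8, 44.0, 61.5):  # hypothetical natural frequencies [Hz]
    status = "OK" if resonance_margin_ok(f_n, f_op) else "too close to resonance"
    print(f"f_n = {f_n:5.1f} Hz -> {status}")
```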
The scientific literature contains many research results that concern a wide spectrum of machine foundation analysis. The main purpose of the research is to identify the dynamic parameters that are extremely important in the diagnosis of this type of structure. Among the various analyses available in the literature, the following can be distinguished:
- research on the interaction between the foundation and the soil with the determination of the displacements and internal forces of the foundation using three-dimensional viscoelastic boundary elements for the model of the upper plate of the frame foundation [23];
- estimating the foundation parameters in the event of failure, using the inherent unbalance of the rotor and developments in modelling for improved balancing [24][25][26];
- simulation analysis for asynchronous operation capacity of the turbogenerator under excitation loss [27];
- study of the superposition of vibrations and analysis of ground sensitivity [16];
- numerical analysis with the use of FEM programs (ANSYS, SAP, STAAD) in order to carry out a modal analysis to determine the frequency and amplitude of the vibrations of the foundation [28][29][30];
- studies of the effects of seismic interactions and the structural configuration on the natural vibrational frequencies of the structures and seismic resistance estimation of an existing turbogenerator foundation by a non-linear static method [31,32];
- field tests of frame foundations in terms of settlement and resistance to temperature load [33];
- an investigation of the influence of the supporting structure on the dynamics of the rotor system [34];
- an investigation of stiffness, damping value, natural frequencies, and vibration mode shapes by modelling the soil-foundation system using the FEM [35][36][37][38][39];
- tests of the damping coefficient conducted on the basis of the analysis of the measuring signal using the wavelet transform [40,41];
- analysis of multi-criteria optimisation with regards to the foundations for the turbogenerators [42,43];
- dynamic analysis of a thin and narrow turbogenerator foundation on piles with differentiation of frequency, shear wave velocity, and mode shape [44] and a determination procedure of load-bearing capacity [45];
- the estimation of multiple fault parameters of a fully assembled turbogenerator system based on the least squares technique, which requires forced response information [46].
Case Study-A Foundation for a Turbogenerator in a Combined Heat and Power Plant
This paper analyses an existing foundation for a turbogenerator directly coupled to a steam turbine with a rotational speed of 3000 rpm. The turbine is an axial single body and has four vents and two steam outlets for heaters. Steam from the first vent is taken for technological and heating purposes, and from the remaining three, it is directed to low-pressure regenerative exchangers. The turbine set is supported by three bearings, one of which is the load bearing.
The analysed foundation is a reinforced concrete frame structure that consists of a raft plate, columns, and a top plate, as shown in Figure 3. The raft plate of the foundation is set directly on the ground. The reinforced concrete frame part of the foundation is supported on this plate. The top plate is supported by three pairs of columns. The basic thickness of the top plate in the generator section is 2.50 m. All columns have cross-sectional dimensions equal to 1000 mm × 1000 mm. To diagnose the foundation (Figure 4), dynamic, material, and numerical analyses using the results of the experimental tests were carried out.
Dynamic Experimental Test
Dynamic tests were performed with impulse forced vibration of the foundation structure to determine the basic foundation parameters.
Measurement of the Amplitude of Foundation Forced Vibrations
At the level of the floor slab, the measurements of vibrations at the operating speed of the turbine set were conducted in eight points of the foundation's top plate using a piezoelectric accelerometer ( Figure 5). As a result of the research, the following root mean square (RMS) values of the acceleration of vibrations, the velocity of vibrations, and the displacement of the foundation are presented in Table 1. Figure 6 illustrates the frequencies of forced vibrations determined on the basis of the measurements. The measured values of the average displacement amplitudes are between 1 and 5 µm, which is much lower than the permissible amplitude for the foundations of turbogenerators.
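The conversion between the measured quantities can be illustrated with the following Python sketch, which computes the RMS acceleration of a synthetic 50 Hz signal and derives velocity and displacement by dividing by ω and ω². This narrow-band conversion is only valid for vibration concentrated at the forcing frequency, and the signal parameters below are illustrative rather than measured values.

```python
import numpy as np

def rms(signal):
    """Root mean square of a sampled vibration signal."""
    return np.sqrt(np.mean(np.asarray(signal) ** 2))

fs = 5_000.0                      # sampling rate [Hz]
f = 50.0                          # forcing frequency [Hz]
t = np.arange(0.0, 1.0, 1.0 / fs)
d_amp = 3e-6                      # assumed displacement amplitude [m] (~3 um)
omega = 2.0 * np.pi * f
acc = d_amp * omega ** 2 * np.sin(omega * t)   # a(t) = d * omega^2 * sin(omega t)

a_rms = rms(acc)
v_rms = a_rms / omega             # narrow-band conversion: acceleration -> velocity
d_rms = a_rms / omega ** 2        # narrow-band conversion: acceleration -> displacement
print(f"RMS acceleration: {a_rms:.3f} m/s^2")
print(f"RMS velocity:     {v_rms * 1e3:.3f} mm/s")
print(f"Displacement amplitude: {d_rms * np.sqrt(2) * 1e6:.2f} um")
```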
Assessment of the Compressive Strength of Concrete
The compressive strength of concrete was determined on the basis of core samples ϕ × h = 100 × 100 mm that were cut from the elements of the tested structure, as shown in Figure 7. The measurement site was previously scanned with a reinforcement detector to avoid cutting the rebars when drilling the core samples. The test was carried out with a BOSCH drilling set in accordance with the procedures specified in standards EN 12504:1:2009 and EN 12390-3:2009. The sampled drillings were compressed in a testing machine. The strength grade of the concrete was determined on the basis of the results of the destructive tests of the core samples (Table 2). It was determined in relation to standard EN 13791:2019-12.
According to the standard EN 13791:2019-12, it was assumed that the characteristic strength of concrete in the tested elements is the lower of the following two values, where: f_ck,is,cube - characteristic compressive strength of the concrete in the structure, which corresponds to the strength of the concrete determined on cubic samples with a side length of 150 mm; f_m(n),is - the average value of the concrete's compressive strength in the structure obtained from n measurement results; f_is,lowest - the lowest of the determined values of the compressive strength of the concrete in the structure; k_n - coefficient that depends on the number of samples n (for n = 7, k_n = 2).
After the calculation, a characteristic compressive strength of 26.38 MPa was obtained. Based on the results of the tests, it can be assumed that the value of the characteristic strength of the concrete tested in the foundation structure is not higher than 26.38 MPa and, according to the EN 13791:2019-12 standard, its strength grade corresponds to the archival documentation.
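A minimal sketch of this type of evaluation is shown below; it assumes an EN 13791-style criterion in which the characteristic in-situ strength is the lower of the mean strength reduced by k_n times the sample standard deviation and the lowest single result increased by 4 MPa. Both the exact form of the criterion and the core strengths are assumptions of the sketch, not values quoted from the tests.

```python
import statistics

def characteristic_strength(core_strengths, k_n):
    """Characteristic in-situ compressive strength taken as the lower of:
    (1) mean - k_n * sample standard deviation, and
    (2) lowest single result + 4 MPa.
    This particular form is an assumption of the sketch."""
    f_mean = statistics.mean(core_strengths)
    s = statistics.stdev(core_strengths)
    f_lowest = min(core_strengths)
    return min(f_mean - k_n * s, f_lowest + 4.0)

# Hypothetical strengths of n = 7 cores [MPa]; k_n = 2 as stated in the text.
cores = [29.1, 31.5, 27.8, 33.0, 30.2, 28.4, 32.1]
print(f"f_ck,is = {characteristic_strength(cores, k_n=2.0):.2f} MPa")
```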
Sclerometer Test of Concrete
Concrete homogeneity tests were carried out using a Schmidt "N" sclerometer according to the procedures specified in the standard PN-EN 12504-2:2021-12. Measurement locations were assumed in the following foundation elements: a horizontal longitudinal beam, an outer column, and a middle column. The results of the sclerometer measurements presented in Tables 3 and 4 were correlated at the drilling sites. Sclerometer measurements were used to assess the quality of the concrete on the basis of the homogeneity of its strength properties. The following equation was adopted as the hypothetical regression equation:
R_av = L_av·[0.0356·L_av·((n_L/100)² + 1) − 0.795] + 6.4 = 51.00 MPa (9)
s_R = L_av·(n_L/100)·√[0.00254·L_av²·((n_L/100)² + 2) − 0.1134·L_av + 0.633] = 5.45 MPa (10)
R_min = R_r − 1.64·s_R = 42.06 MPa (12)
Correction coefficients that depend on the concrete age α = 0.6 and the dry air state β = 1 were adopted in the analysis.
The following equation was adopted as the hypothetical regression equation:
R_av = L_av·[0.0356·L_av·((n_L/100)² + 1) − 0.795] + 6.4 = 61.73 MPa (17)
s_R = L_av·(n_L/100)·√[0.00254·L_av²·((n_L/100)² + 2) − 0.1134·L_av + 0.633] = 2.62 MPa (18)
The analysis adopted correction coefficients that depend on the concrete age α = 0.6 and the dry air state β = 1, from which the guaranteed concrete strength was determined. Based on the sclerometer tests, the concrete strength grade in the horizontal beams was estimated as C20/25 and in the columns as C25/30. Finally, concrete of class C20/25 was adopted for the entire structure. It is one class weaker than was assumed in the archival project.
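The sketch below implements Equations (9), (10), and (12) in the reconstructed form given above, with R_r taken as R_av because Equation (11) is not reproduced here. The mean rebound number and its coefficient of variation are illustrative inputs, chosen only so that the output is of the same order as the values reported for the beams.

```python
import math

def sclerometer_strength(L_av, nu_L):
    """Mean strength R_av, its standard deviation s_R, and the minimum
    strength R_min from Schmidt-N rebound results, following the
    reconstructed regression of Equations (9), (10), and (12).

    L_av : mean rebound number
    nu_L : coefficient of variation of the rebound readings [%]
    """
    n = nu_L / 100.0
    R_av = L_av * (0.0356 * L_av * (n ** 2 + 1.0) - 0.795) + 6.4
    s_R = L_av * n * math.sqrt(
        0.00254 * L_av ** 2 * (n ** 2 + 2.0) - 0.1134 * L_av + 0.633
    )
    R_min = R_av - 1.64 * s_R   # assumes R_r = R_av in Equation (12)
    return R_av, s_R, R_min

R_av, s_R, R_min = sclerometer_strength(L_av=48.3, nu_L=4.3)  # illustrative inputs
print(f"R_av = {R_av:.2f} MPa, s_R = {s_R:.2f} MPa, R_min = {R_min:.2f} MPa")
```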
Measurement of the Intensity of the Carbonation Process of the Subsurface Concrete Layer
Under the influence of the carbon dioxide contained in the atmosphere (CO 2 ), and the moisture in the pores of the concrete, the subsurface concrete layer undergoes a gradual process of carbonation. The carbonation front gradually moves deeper into the concrete, with the main reaction in this process being the reaction of carbon dioxide with calcium hydroxide. As a result of this reaction, calcium carbonate (CaCO 3 ) is formed. This lowers the reaction of the concrete, which in turn leads to a gradual loss of the protective properties of the concrete against steel. The pH of fresh concrete is 11.8-12.6. It is assumed that a decrease in the concrete's pH to about 10.0-11.8 causes the loss of stability of the protective passive layer on the steel. Within the performed research, the scope and intensity of the carbonation process were assessed using the rainbow test. This test allows the pH distribution profile to be determined within the range of 5.0-13.0 (with gradation every two pH degrees). Measurement involved spraying the surface of the fresh fracture of the tested element with the indicator and then determining the pH distribution based on the colour table, as shown in Figure 8. The tests were carried out according to the procedures specified in the standard PN-EN 12390-12:2020-06.
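The classification implied by these pH thresholds can be written as a few lines of Python; the depth profile used below is purely hypothetical and does not represent the profile measured on the tested cores.

```python
def passivation_status(ph):
    """Classify the protective state of the cover concrete from the pH
    measured in the rainbow test, using the thresholds quoted in the text."""
    if ph >= 11.8:
        return "passive layer stable (uncarbonated concrete)"
    if ph >= 10.0:
        return "passive layer at risk (partially carbonated concrete)"
    return "passive layer lost (carbonated concrete)"

# Hypothetical pH readings at increasing depth from the concrete surface.
profile_mm = {0: 8.5, 5: 9.5, 10: 11.0, 20: 12.4, 30: 12.6}
for depth, ph in profile_mm.items():
    print(f"{depth:>2} mm: pH {ph:4.1f} -> {passivation_status(ph)}")
```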
Measurement of the carbonation intensity was carried out on four drilling core samples from the foundation columns and beams, as shown in Figure 9. No carbonation was found in these samples. The main reinforcement bars have a cover of 2.5-3.0 cm. No corrosion was found in the uncovering of the main reinforcement (made of Ø32 bars) in either the beams or columns.
Investigation of the Thickness of the Concrete's Cover and the Location and Diameter of the Reinforcement
Measurements were carried out in a non-destructive manner with the use of specialised Hilti Ferroscan instruments that are intended for locating the reinforcing bars and for measuring the thickness of the concrete's cover [47]. The measurements used the electromagnetic method of excitation of currents in the reinforcement. The instruments automatically calculate the cover thickness as the smallest distance between the bar side and the concrete surface for a given bar diameter. The electromagnetic method is a research method that uses the phenomenon of induction of a current in an electric circuit that is caused by the electromagnetic field of the circuit being disturbed. Testings of reinforced concrete structures with the use of the electromagnetic method involve successive scanning of concrete surfaces with a measuring probe to locate the reinforcing bars, followed by the determination of their diameter and thickness. Before measuring the thickness of the cover, the diameters of the reference bars, determined on the basis of the technical documentation or during a micro-uncovering, are entered into the device. The tests were carried out at six foundation measurement sites (S26-S31) on an area of 60 cm × 60 cm, as shown in Figures 10 and 11.
The scanning of the reinforcing bars confirmed that their actual distribution is similar to that presented in the technical documentation. On the basis of the performed uncoverings, it was found, in the case of the columns and beams, that the main reinforcement is consistent with the archival documentation, has a diameter of 32 mm, is made of 18G2 steel, and that the stirrup spacing in the beams is equal to 20 cm.
Figure 10. The image of reinforcement scanning at the S26 and S27 measurement sites of the foundation body; red means bars of the main reinforcement, green means cross reinforcement.
Figure 11. The image of reinforcement scanning at the S31 measurement site (column of the foundation body) superimposed on the tested element.
Numerical Analysis of the Foundation
On the basis of the experimental tests and the archival technical documentation, a numerical analysis was performed using the FEM, as shown in Figure 12. The analysis involved the verification of the conditions of the ultimate and serviceability limit states of the foundation, and the performing of tests of forced vibrations. Dynamic numerical analysis was carried out using AxisVM software. The permanent load was defined as the self-weight of the foundation's structure and the weight of the turbine set. In the numerical model, it was modelled as a concentrated mass that is connected, using rigid elements, to the turbine set's fastening points in the foundation, as can be seen in Figure 13. The analysis assumed an elastic ground, the characteristics of which were determined on the basis of archival research.
The self-weight was determined on the basis of the foundation's dimensions, while the weight of the turbine set was assumed according to the technical documentation. For the generator with the stator, the weight is 169.5 t, and for the turbine, it is 66.7 t. In the dynamic calculations, the load from misalignment of the rotating parts of the turbogenerator, i.e., the stator rotor and the turbine rotor, was taken into account. On the basis of the technical documentation, it was assumed that the foundation raft plate is set on subsoil that consists of gravel mix with the degree of compaction I_D = 0.5, and the corresponding elasticity coefficients of the subsoil were adopted for foundations with an area greater than 50 m². The calculated natural frequencies of the foundation do not indicate the presence of a resonance state with any mode shape, as shown in Figure 14. The calculated vibration amplitude of the foundation (Figure 15) is much lower than the permissible vibration amplitude for turbogenerator foundations, which is 20 µm [48]. The calculated theoretical vibration amplitude of the foundation corresponds well with the actual measured vibration amplitude of the foundation during normal operation of the turbogenerator, which is equal to 3 µm (the measured vibration amplitude was classified as not degrading the foundation).
Analysis of the Load-Bearing Capacity of the Foundation
The EN 1991-3 standard was used to calculate the dynamic forces caused by rotation. The interaction effect, which results from the excitation of the machine with the rotating masses and the dynamic behaviour of the structure, can be expressed by an equivalent static force, where: ζ - the damping coefficient, n_s = 50 Hz - the frequency of the exciting force.
According to [24], for turbogenerators on RC frame foundations, the damping coefficient is defined as ζ = Δ/(2π), where: Δ - the logarithmic damping decrement of the foundation, which is equal to approx. 0.4 for RC frame foundations. After inserting n_e = 44.01 Hz, ϕ_M1 = 3.1 was obtained.
According to [24], the computational value of the force, which replaces the impact of dynamic loads on the foundation, is obtained from the formula F_eq = ϕ_M·µ·γ·F_s, where: ϕ_M - dynamic coefficient (as above), µ - fatigue factor equal to 2, γ - calculation factor equal to 5.
The centrifugal force of a rotating part F s = 111.3 kN.
For the purposes of the calculations, this force was divided into the force from the stator rotor, F_s,stator = 71.0 kN, and the force from the turbine rotor, F_s,turbine = 40.3 kN. The design values are as follows: F_s,stator,eq = 4.2 · 2 · 5 · 71 = 2982 kN and F_s,turbine,eq = 4.2 · 2 · 5 · 40.3 = 1693 kN.
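The calculation can be retraced with the Python sketch below. The single-degree-of-freedom magnification formula used to obtain the dynamic coefficient is an assumption of the sketch (it reproduces ϕ_M1 of about 3.1 for the quoted Δ, n_s, and n_e), while the design forces follow directly from the coefficients and rotor forces given in the text.

```python
import math

def magnification(n_s, n_e, zeta):
    """Single-degree-of-freedom dynamic magnification factor (assumed form)."""
    r = n_s / n_e
    return 1.0 / math.sqrt((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)

delta = 0.4                          # logarithmic damping decrement (RC frame)
zeta = delta / (2.0 * math.pi)       # damping coefficient
phi_M1 = magnification(n_s=50.0, n_e=44.01, zeta=zeta)
print(f"zeta = {zeta:.3f}, phi_M1 = {phi_M1:.2f}")   # approx. 3.1

# Equivalent static (design) forces: F_eq = phi_M * mu * gamma * F_s.
phi_M, mu, gamma = 4.2, 2.0, 5.0
for name, F_s in (("stator rotor", 71.0), ("turbine rotor", 40.3)):
    print(f"F_eq ({name}) = {phi_M * mu * gamma * F_s:.0f} kN")
```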
The forces determined in this way were loaded on the turbogenerator's foundation. Additionally, the exceptional moment derived from the start-up and stop run-up loads was considered in the calculations. According to the EN 1991-3 standard, the equivalent static moment for this case is calculated from M_k,max, the peak value of the moment derived from the start-up and stop run-up loads, taken according to the archival documentation. For such loads, the required main reinforcement was calculated, as can be seen in Figures 16 and 17. The obtained values of the reinforcement area do not exceed the value of the reinforcement applied in the foundation, that is, min. 5 × ϕ32 per 1 m = 4020 mm²/m. The execution of the numerical calculations of the natural frequencies and mode shapes of the foundation construction was possible thanks to the research and measurements conducted. The theoretical vibration amplitude of the foundation is greater than that measured during normal operation of the turbogenerator; however, both are lower than the permissible value. The calculated values of the stresses in the concrete and reinforcing bars are lower than the permissible values.
Discussion
The performed dynamic analyses show the great possibilities of using the FEM in the diagnostic process of foundation structures for turbogenerators. Due to the enormous cost of such objects, the use of FEM allows one to accurately verify the entire structure not only for the dynamic behaviour of the object but also for the structural strength and resistance to earthquake-type excitations. The complexity of such an object requires a very accurate reconstruction of the whole structure, together with taking into account the correct material parameters and dynamic characteristics. Therefore, the creation of a proper numerical model required conducting experimental dynamic and material tests of the foundation. The results of the experimental tests, especially in situ, enable the design of the existing foundation structure strengthening for future performance and update the initial foundation structural model for final numerical analysis. It can be stated that numerical analysis allows for better recognition of foundation properties, dynamic damping characteristics, fatigue, rheological changes, corrosion, and the degree of efficiency due to exploitation. The research presented in this paper was based on a limited amount of data. This problem, which affects the effectiveness of numerical analysis, can be solved in the future by using artificial neural networks. The ANN represents an artificial system based on mathematical models similar to biological nervous systems and is capable of intelligently processing simulated information. Currently, ANNs are used for the diagnostics and monitoring of shafts in turbines [49]. The benefits of the use of neural networks can exceed many times the work required for diagnostics of the foundation for turbogenerators implemented to date.
Conclusions
The turbine set has a value that is on average 20 times higher than the cost of its foundation. The foundation, together with the turbine set and the adjacent devices, must ensure safe use in continuous operation conditions, where the turbine set shaft rotates at 3000/3600 rpm (which corresponds to a frequency of 50/60 Hz). During operation, the dynamic condition of the turbine set usually deteriorates as a result of the ongoing wear and tear processes. Changes in harmonic values in the analysis of the vibrations' spectrum (e.g., fast Fourier transformation) affect the harmonically variable load in both the horizontal and vertical directions. The main design goal for a machine's foundation is to limit its movement to amplitudes that do not endanger the proper operation of the machine. In the case of high-speed machines, it is desirable to design the foundation to be low tuned, with the value of the vertical natural frequency below the operating speed of the machine.
The article presents the dynamic analysis and diagnostics of reinforced concrete foundations for machines, which are very important in terms of ensuring the proper safety, reliability, and durability of these very expensive machines. The design of a frame foundation for a turbogenerator is the most difficult task when compared to designing any other foundation. There are many parameters that influence the foundation's response. The rigidity of the frame structure plays a key role. The individual vibration characteristics of individual elements, such as columns and beams, are very important in determining the behaviour of the foundation [50].
A real foundation of a turbogenerator in a CHP plant is presented as the case study. The dynamic analytical and experimental analysis method presented in the paper turned out to be a good tool to verify the foundation structure, making it possible to perform a sensitivity analysis of the impact of changes in various parameters. A numerical analysis (AxisVM software) was carried out using experimental data in which the bearing capacity of the foundation was determined and the natural frequencies and maximum amplitude were checked. In the analysis process, the auxiliary material tests were also very important. The experimental material tests performed were related to the strength of the concrete and the identification of reinforcement. The numerical analysis was positively verified using experimental tests. Comparison of analytical and experimental results allows optimising the calculation model of the foundation structure, as well as determining the dynamic parameters of the existing foundation structure. It also enables the behaviour of the foundation after reconstruction and strengthening of its structure, as well as the damage or remaining service time to be determined. A key factor in the successful design of the foundation of a turbogenerator is a precise engineering analysis of the foundation response to dynamic loads caused by the machine operation.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-02-10T16:18:38.435Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "f258981c36fd15ed35c64204ff5fa14ce842a244",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/16/4/1421/pdf?version=1675849652",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "72255d97f7a0e2bc2889464d99be4cd49c64bbb4",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234926464 | pes2o/s2orc | v3-fos-license | Unintentional Fusion in Preserved Facet Joints without Bone Grafting after Percutaneous Endoscopic Transforaminal Lumbar Interbody Fusion
Introduction A percutaneous endoscopic transforaminal lumbar interbody fusion (PETLIF) procedure has been previously developed. During postoperative follow-up, in some patients, bone fusion occurred between opened facet joints, despite not having bone grafting in the facet joints. Here, we investigated facet fusion's frequency and tendencies following PETLIF. Methods A retrospective analysis was conducted on a prospectively collected, nonrandomized series of patients. Forty-two patients (6 males and 36 females, average age: 69.9 years) who underwent single-level PETLIF at our hospital from February 2016 to March 2019 were included in this study. Patients were assessed with lumbar X-ray images and computed tomography (CT) prior to, immediately after, and 1 year after surgery. Results Pseudarthrosis was not observed in any patients, and facet fusion was observed in 26 of 42 post-PETLIF patients (61.9%) by CT 1 year postoperatively. The average interfacet distance increased from 1.3 mm preoperatively to 4.5 mm postoperatively, and facet fusion was observed under the opened conditions of 3.8 mm at 1 year. Segmental lordotic angle of the fusion segment in the lumbar X-ray images was significantly larger in the facet fusion subgroup prior to surgery, immediately following surgery, and 1 year after surgery compared to the facet non-fusion group (p=0.02, p<0.01, p=0.01, respectively). There were no significant differences in patient background, correction loss of segmental lordosis, interfacet distance, or clinical score between the facet fusion and facet non-fusion subgroups. Conclusions Facet fusion was achieved over time within the facet joints that were opened through indirect decompression after PETLIF. We hypothesized that the preserved facet joints potentially became the base bed for spontaneous bone fusion due to the preserved facet joint capsule and surrounding soft tissue, which maintained cranio-caudal facet traffic and blood circulation in the facet joints. The complete preservation of the facet joints was a key advantage of minimally invasive lumbar interbody fusion procedures. Level of evidence Level III
Introduction
Spinal instrumentation and fusion are established surgical treatments for degenerative spinal disorders associated with instability such as degenerative lumbar spondylolisthesis, spinal instability, and spinal foraminal stenosis [1][2][3] . Percutaneous pedicle screw (PPS), lateral lumbar interbody fusion (LLIF), and spinal endoscopic techniques have been developed in recent years, and reports have demonstrated their effectiveness in minimally invasive spinal fusion procedures [4][5][6][7] . An advantage of these techniques is that they are less invasive and rely on indirect rather than direct decompression; additionally, they allow for complete preservation of the facet joints.
Nagahama et al. proposed a percutaneous endoscopic transforaminal lumbar interbody fusion (PETLIF) procedure as a minimally invasive lumbar spinal fusion surgery 9) . The procedure is a full-endoscopic lumbar interbody fusion that involves passing an interbody cage posterolaterally through Kambin's triangle using the original oval devices [9][10][11] . This surgical technique was developed from the full-endoscopic intervertebral disc curettage that was being performed for infectious lumbar spondylodiscitis 12) . Nagahama et al. have reported on the effectiveness of PETLIF in a previous clinical study 9) . In PETLIF, neurological symptom improvements are achieved by performing indirect decompression in degenerative spinal disorder patients with preserved bilateral facet joints 9) . Occasionally, during postoperative follow-up after PETLIF, patients have demonstrated bone fusion between opened facet joints, despite not having bone grafting in the facet joints.
Although the goal of minimally invasive lumbar interbody fusion, such as PETLIF, is to achieve bone fusion in the anterior vertebral elements, the ability to obtain bone fusion at both the facet joint and the intervertebral body is, of course, a major advantage for spinal fusion surgery. If facet-preserving minimally invasive spinal fusion surgery is more likely to result in facet fusion than conventional open TLIF or PLIF, it may be a better treatment option for spinal fusion surgery. However, to our knowledge, there is no study examining mechanisms and trends in facet joint fusion in PETLIF or similar techniques. Therefore, the present study aimed to retrospectively investigate the frequency and tendencies for facet fusion after PETLIF. This study is important in that it is the first report on the details of facet fusion without bone grafting, which demonstrates the benefits of preserving the facet joint in minimally invasive lumbar fusion surgery.
Patient background
The present study was conducted with approval from the relevant institutional review board. A total of 54 patients underwent single-level PETLIF at our hospital from February 2016 to March 2019. The PETLIF procedure was indicated for patients with degenerative lumbar spondylolistheses with accompanying instability, lumbar canal stenosis, and degenerative lumbar scoliosis (e.g., leg pain and/or back pain that was resistant to conservative treatment, such as analgesic administration). PETLIF is a technique in which an interbody cage is inserted through Kambin's triangle 9,11) , as described below, and is applied to either L3/4 or L4/5, where an anatomically safe working space can be assured in terms of the facet bone morphology and exiting nerve roots 9) . The contraindication for PETLIF includes patients with severe slip (Meyerding grade 3 or more). PETLIF can typically be performed on patients with narrow discs (or almost no disc space), and the disc height does not affect the indication for PETLIF. Patients with severe osteoporosis (T-score of −2.5 SD or less with osteoporotic vertebral fracture) who were considered to be at high risk for intraoperative pedicle screw pull-out were excluded from the surgical indication for the PETLIF procedure. Of the 54 patients who underwent PETLIF, 3 patients who required additional surgery were excluded, and 42 patients who underwent computed tomography (CT) 1 year after surgery and in whom facet fusion assessment was possible were included in this study. Nine patients were unintentionally excluded because they had not appeared for follow-up 1 year postoperatively, and their lumbar CT images were not available. The diseases among the 42 patients (6 males and 36 females, average age: 69.9 years) were as follows: degenerative lumbar spondylolisthesis, 39 patients; lumbar canal stenosis, 2 patients; and lumbar degenerative scoliosis, 1 patient. The operative level was L3-4 in 2 patients and L4-5 in 40 patients.
Surgical procedures
The PETLIF surgical procedure was conducted as previously reported 9) . Surgery was performed under general anesthesia with nerve monitoring (NVM5; NuVasive, San Diego, CA). The patient was placed in a prone position on a frame that allowed radioscopy. A PPS (IBIS Spinal System; Japan Medical Dynamic Marketing, Tokyo, Japan) was inserted into the vertebral body to be fixed under fluoroscopic guidance, and spinal rods were inserted to correct the slippage of the vertebral body 9) . The Spine TIP Transforaminal Approach kit (Karl Storz GmbH, Tuttlingen, Germany) was used to approach the intervertebral disc from Kambin's triangle. PETLIF oval dilator and sleeve (Robert Reid, Inc., Tokyo, Japan) were set up within the intervertebral disc, and the interbody distance was expanded 9) . Bone from the iliac crest and/or spinous process was harvested percutaneously and used for grafting 9) . A ring curette and nuclear pulposus forceps were used to excise the intervertebral disc and create a graft-base bed through the oval sleeve, after which grafted bone (autogenous local bone or a mixture of local bone and artificial bone [Primabone; Japan Medical Dynamic Marketing, Tokyo, Japan]) was inserted. The PETLIF half oval dilator and sleeve (Robert Reid Inc.) were inserted into the intervertebral disc to retract the exiting nerve root. Subsequently, an interbody cage of a 9 or 10 mm height (the same size as used in open surgery) was inserted 9) . Finally, screws were tightened to apply the compression load to the interbody cage. In this surgical procedure, the bilateral facets were preserved without exposure.
Assessment
The patient background (age, sex, and drugs for osteoporosis treatment) and image findings were retrospectively analyzed. Image assessments were conducted with lumbar X-ray images and lumbar CT prior to surgery, immediately after surgery (1 week after surgery), and 1 year after surgery. Preoperative bone mineral density (BMD) of the hip was measured by dual energy X-ray absorptiometry. Radiographic outcomes were assessed in a blinded fashion by two independent coauthors. The extent of % slip (anterior slip) of the fusion segment was measured on the preoperative standing lateral X-ray image of the whole spine. The segmental lordotic angle of the fusion segment was calculated from an X-ray image of the intermediary position of the lumbar profile taken in decubitus neutral position (Fig. 1A). The maximum interfacet distance at the upper vertebral endplate level of the fused lower vertebral body was calculated from CT axial images (Fig. 1B). The presence or absence of caudal pedicle screw invasion to the facet joint was evaluated with lumbar CT axial images immediately after surgery. A screw invasion of 1 mm or more in the facet joint was judged as the presence of screw invasion. The facet joint was assessed with lumbar CT axial images and sagittal reconstruction images 1 year after surgery. Continuous bone bridging observed between the facet joints was determined as facet fusion (Fig. 2). Interbody bridging bone on CT of the lumbar spine 1 year after surgery was evaluated by comparing it with CT images immediately after surgery, using the fusion criteria reported by Choi et al. 13) . It was considered to be evidence of interbody bridging bone when there was fusion with remodeling and trabeculae or when the graft was intact, without being fully remodeled and incorporated but with no radiolucency present 13) . The presence of interbody cage subsidence (subsidence over 2 mm as compared to immediately after surgery 1) ) and pedicle screw loosening (a lucent zone around the screw 14) ) were assessed in CT multi-planar reconstruction images obtained 1 year after surgery 1) . Fusion criteria based on CT imaging were defined as any evidence of bridging bone in the interbody space and/or bridging of the facet joints 15) . Lumbar pseudarthrosis was defined as the presence of more than 5° of angular motion in flexion-extension radiographs at the fusion level and a loosening of the pedicle screws on CT 1 year postoperatively 1) . As a clinical assessment, the Japanese Orthopedic Association (JOA) score and the Roland-Morris Disability Questionnaire (RDQ) score were assessed preoperatively and 1 year postoperatively.
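The imaging criteria above amount to a simple rule-based classification, which can be expressed as follows; the example case is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SegmentFindings:
    interbody_bridging: bool          # bridging bone in the interbody space on CT
    facet_bridging: bool              # continuous bone bridge across a facet joint on CT
    flexion_extension_motion: float   # angular motion at the fused level [degrees]
    screw_loosening: bool             # lucent zone around a pedicle screw on CT

def ct_fusion(f: SegmentFindings) -> bool:
    # Fusion: any bridging bone in the interbody space and/or across the facet joints.
    return f.interbody_bridging or f.facet_bridging

def pseudarthrosis(f: SegmentFindings) -> bool:
    # Pseudarthrosis: more than 5 degrees of angular motion plus screw loosening.
    return f.flexion_extension_motion > 5.0 and f.screw_loosening

case = SegmentFindings(False, True, 1.5, False)  # facet fusion without interbody bridging
print("CT fusion:", ct_fusion(case), "| pseudarthrosis:", pseudarthrosis(case))
```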
Statistical analysis
All data are expressed as the mean±standard deviation. Comparative statistical analyses were conducted for each parameter of the patient background and image findings between the facet fusion and facet non-fusion groups. An unpaired Student's t-test was used for analysis of continuous variables, and either a chi-squared test or Fisher's exact test was used for analysis of binomial and categorical variables. Statistical significance was defined as a p-value<0.05.
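A minimal sketch of the comparisons described above is shown below, using SciPy; the group values are hypothetical, and only the 2 × 2 table for interbody bridging bone is reconstructed from the percentages quoted in the Results.

```python
import numpy as np
from scipy import stats

# Hypothetical segmental lordotic angles [degrees] in the two subgroups.
fusion = np.array([14.2, 15.8, 13.5, 16.1, 14.9, 15.3, 13.8])
non_fusion = np.array([11.9, 12.4, 13.1, 11.2, 12.8, 12.0])

# Continuous variable: unpaired Student's t-test.
t_stat, p_t = stats.ttest_ind(fusion, non_fusion, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_t:.3f}")

# Binomial variable (interbody bridging bone yes/no): Fisher's exact test on a
# 2 x 2 table reconstructed from the reported rates (21/26 vs. 11/16 patients).
table = [[21, 5],
         [11, 5]]
odds_ratio, p_f = stats.fisher_exact(table)
print(f"Fisher's exact test: p = {p_f:.3f}")
```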
Results
Bone fusion by CT evaluation was obtained in 37 of 42 patients (88.1%); facet fusion was observed in 26 of 42 patients (61.9%); and interbody bridging bone was observed in 32 patients (76.2%) 1 year after PETLIF surgery. Bilateral facet fusion was observed in 15 patients, and unilateral facet fusion in 11 patients. In all cases, including the five cases in which bony fusion was not confirmed by CT, there was no angular instability of more than 5° in flexion-extension radiographs and no lumbar pseudarthrosis after PETLIF. The mean BMD was 0.80±0.15 g/cm², and the mean T-score was −0.91±1.29 at the left hip. The T-score of the patient with the loosened pedicle screw was −2.14.
We compared the facet fusion and facet non-fusion subgroups. No statistically significant differences were observed in terms of sex, diagnosis, BMD of the total hip, and osteoporosis treatment between the two subgroups (Table 1). Interbody bridging bone was seen in 80.8% and 68.8% in the facet fusion and the facet non-fusion groups, respectively, with the former group tending to have a higher rate of bone bridge formation. However, there were no statistically significant differences between the two groups (p=0.57) (Table 2). Also, no statistically significant differences were observed for cage subsidence and pedicle screw loosening between the two groups (p=0.50 and p=0.33, respectively) (Table 2). Caudal pedicle screw invasion to the facet joint was observed in 3 of 52 facet joints (5.8%) in the facet fusion group and 3 of 32 facet joints (9.4%) in the facet non-fusion group, with no significant difference between the two groups ( Table 2). The mean % slip of the fusion segment was significantly greater in the facet fusion group than that in the facet non-fusion group (20.5% and 15.9%, respectively, p=0.04). Segmental lordotic angle of the fusion segment in the lumbar X-ray images was significantly larger in the facet fusion subgroup prior to surgery, immediately following surgery, and 1 year after surgery compared to the facet non-fusion group (p=0.02, p<0.01, and p=0.01, respectively) ( Table 2). Correction loss of the segmental lordotic angle from immediately after surgery to 1 year after surgery was equivalent in the two subgroups (p=0.42). The average interfacet distance in the facet fusion subgroup increased from 1.3 mm prior to surgery to 4.5 mm after surgery, and facet fusion was observed under the opened conditions of 3.8 mm at 1 year. The interfacet distance was equivalent between the two subgroups prior to surgery, immediately after surgery, and 1 year after surgery (p=0.12, p=0.40, and p= 0.42, respectively) ( Table 2).
In clinical assessment, the mean JOA score improved from 15.3±2.2 preoperatively to 27.1±2.0 1 year after surgery in the facet fusion group and from 13.8±3.9 to 27.4±1.7 in the non-fusion group. The JOA scores preoperatively and 1 year postoperatively were equivalent between the two groups (p=0.24 and p=0.35, respectively). The RDQ score improved from 10.1±4.8 preoperatively to 2.3±2.0 1 year after surgery in the facet fusion group and from 10.0±4.5 to 2.6±2.7 in the non-fusion group. The RDQ scores preoperatively and 1 year postoperatively were equivalent between the two groups (p=0.50 and p=0.37, respectively).
Case presentation
As a representative case, we describe a 68-year-old female. Right-entering L4/5 PETLIF was conducted for L4 degenerative spondylolisthesis. The spinal canal and bilateral facet joint openings were observed on CT immediately after surgery ( Fig. 2A, 2B). Bone ingrowth progressed over time in the opened facets, and bilateral facet fusion was achieved 1 year after surgery (Fig. 2C, 2D).
Discussion
In the present study, we investigated the frequency and trend of facet joint fusion without bone grafting in patients undergoing PETLIF, a minimally invasive spinal fusion procedure. This is the first study to evaluate facet joint fusion in detail, and this study demonstrates the possibility of fusion of preserved facet joints without bone grafting after lumbar interbody fusion surgery. That is, the results of the current study demonstrated advantages of preserving the facet joints in minimally invasive spinal interbody fusion surgery.
Using the PETLIF technique, facet fusion was achieved in 61.9% of patients by 1 year after the procedure with preservation of bilateral facet joints. Bone grafts were not conducted for any of the patients' facets, and spontaneous fusion was achieved in facets without direct surgical invasiveness. There have been occasional studies reporting that bone fusion was achieved in the facet joints without surgical invasiveness following lumbar interbody fusion 7,8) . Satake et al. 8) reported that spontaneous facet fusion was achieved in 52 of 81 segments (64%), without bone grafting for the facet joints, 2 years after lumbar fusion surgery using LLIF and pedicle screws to preserve the bilateral facet joints. Kondo et al. 7) used CT to evaluate bone fusion after microendoscopic TLIF with a PPS system and reported that preserved contralateral facet joint fusion was achieved in 34 of 200 patients (17%) and 27 of 88 patients (31%) by an average of 15 and 40 months after surgery, respectively. Researchers have considered that the facet joint potentially becomes the base bed for spontaneous bone fusion 8) . We speculate that this may be as a result of maintaining blood circulation in the facet joints through preservation of the facet capsule and the surrounding soft tissue.
There have been no detailed reports on facet fusion, and it is not clear which types of patients have facet fusion. In this study, the preoperative % slip of the fusion segment in the facet fusion group was significantly higher than that in the facet non-fusion group. The degree of slippage may be associated with the severity of facet osteoarthritis changes, which in turn may have affected the postoperative facet fusion. In addition, the current study demonstrated that the fused segmental lordotic angle in the facet fusion group was significantly larger than that in the non-fusion group prior to surgery, immediately after surgery and 1 year after surgery. A large segmental lordotic angle results in an increase in the facet contact area on the cranio-caudal side, which could be advantageous for facet fusion. The cases who underwent PETLIF with bilateral facet joint preservation tended to have a higher rate of facet fusion than that of the cases that Kondo et al. 7) reported of microendoscopic TLIF with unilateral facet resection. Bilateral facet preservation may be advantageous for facet fusion over unilateral facet preservation 7,8) .
Reports have described bone facet regrowth and unintended facet arthrodesis after lumbar decompression and lumbar dynamic stabilization surgery 16-21). Dohzono et al. 16) evaluated bone regrowth at facet joints 2 years after microendoscopic lumbar decompression surgery and reported a significant correlation between bone regrowth and percentage slippage in lumbar spondylolisthesis. Similarly, Guigui et al. 17) reported that postoperative spinal instability greatly influenced the amount of bone ingrowth at the operation site after lumbar decompression surgery. Kanayama et al. 18,19) reported that facet fusion occurred in 12 of 64 patients (18.8%) and 14 of 43 patients (32.6%) by an average of 59.5 months and 82 months, respectively, after posterior lumbar dynamic stabilization surgery using the Graf artificial ligament. Fay et al. 21) reported that unintended facet fusion occurred in 52.1% of patients 4 years after Dynesys dynamic stabilization. Furthermore, Fay et al. 21) reported that facet fusion was significantly greater in patients with lumbar spondylolisthesis and those over the age of 65 years. These reports suggest that bone regrowth is likely to occur when spinal instability is present. The present study did not demonstrate statistically significant differences for screw loosening, interbody fusion disorders, and correction loss of the segmental lordotic angle between the fusion and non-fusion subgroups. That is, spinal instability was thought to contribute minimally to facet fusion after PETLIF in the current study cases.
Facet fusion was observed over time after PETLIF between facet joints that were widely opened (to an average of 3 mm) immediately after surgery (Fig. 2). A characteristic of PETLIF is that slippage of the vertebral body is forcefully corrected with the PPS and oval retractor to expand the spinal canal 9) . In the PETLIF procedure, the facet joints are opened while interfacet traffic on the cranio-caudal side is maintained by the preserved facet joint capsule, along with the correction of the vertebral slippage. Microfractures and hemorrhaging are observed in the forcefully opened degenerated facet joints. Furthermore, the preservation of the surrounding soft tissue maintains favorable blood flow to the facet joints, which could promote bone ingrowth between the facet joints.
The present study demonstrated the possibility of fusion in facet joints preserved through minimally invasive lumbar fusion procedures. However, facet fusion was at most a secondary aspect. The objective of lumbar interbody fusion is to achieve bone fusion in the anterior vertebral elements, which support most of the imposed load 22) . We have examined not only the facet fusion but also the interbody bone fusion in detail on CT 1 year after PETLIF. Lumbar pseudarthrosis was not observed in any patients; however, interbody bridging bone was observed in 32 patients (76.2%). Compared with those of previous reports that have assessed intervertebral bridging bone formation on CT, the results of this study are comparable to the interbody fusion rate (80.6%) 1 year after open TLIF reported by Nagahama et al. 1) Even with minimally invasive lumbar interbody fusion surgery, the creation of an intervertebral bone graft-base bed and intervertebral bone grafting should never be neglected. Although bone fusion rates should not be reduced simply to minimize invasiveness of spinal fusion procedures, minimally invasive spinal fusion could be an option if the rate of bone fusion is comparable to that of conventional spinal fusion surgery.
A limitation of the present study was that the follow-up observation period was relatively short. Facet fusion may progress further over a longer period 7,8) . A second limitation was that the present study has no control group that underwent lumbar interbody fusion by other surgical techniques. Kondo et al. reported that facet joint fusion was achieved in 17% of patients who underwent microendoscopic TLIF 15 months after surgery 7) . In our case series, facet fusion was not obtained in approximately 90% of cases who underwent open TLIF without bone grafting after resection of the facet joint capsule 1 year after surgery. Although these cases are not comparable to the present study because there were differences in terms of patient background, the fusion level and diseases, it is suggested that preservation of the bilateral facet joints and surrounding tissue may be an advantageous factor in facet fusion after PETLIF. A comparative analysis with other lumbar interbody fusion surgery (e.g., open TLIF, low-invasive fusion procedure including LLIF and minimally invasive surgery TLIF or PLIF) in multicenter studies is required to further analyze underlying facet fusion mechanisms. In addition, it is necessary to examine the difference in the facet fusion rate among different pathologies, such as lumbar degenerative scoliosis and lumbar canal stenosis, compared to that in degenerative spondylolisthesis.
In conclusion, facet fusion was observed in 61.9% of patients by 1 year after PETLIF. Facet fusion was achieved over time within the facet joints that were opened through indirect decompression. We inferred that the underlying mechanism involved progression of bone ingrowth between the degenerative facet joints due to preservation of the facet capsule and the surrounding soft tissue, which maintained cranio-caudal facet traffic and blood circulation in the facet joints. The complete preservation of the facet joints was considered a key advantage of minimally invasive lumbar interbody fusion procedures.
Conflicts of Interest:
The authors declare that there are no relevant conflicts of interest.
Author Contributions: Katsuhisa Yamada and Ken Nagahama analyzed and wrote the manuscript, and all authors participated in the study design. All authors have read, reviewed and approved the article.
Ethical Approval: The present study was approved by the institutional review board of Wajokai Sapporo Hospital (approval code: 2017-1).
Informed Consent: Informed consent was obtained from all participants in this study. | 2021-05-22T00:02:58.019Z | 2021-04-14T00:00:00.000 | {
"year": 2021,
"sha1": "613aff8adf4ef7d50794e70ddd3498906226719c",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/ssrr/advpub/0/advpub_2020-0232/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfac6b02852abfd47037e6af9728795384b7604f",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4670946 | pes2o/s2orc | v3-fos-license | Affinity Capture and Identification of Host Cell Factors Associated with Hepatitis C Virus (+) Strand Subgenomic RNA*
Hepatitis C virus (HCV) infection leading to chronic hepatitis is a major factor in the causation of liver cirrhosis, hepatocellular carcinoma, and liver failure. This process may involve the interplay of various host cell factors, as well as the interaction of these factors with viral RNA and proteins. We report a novel strategy using a sequence-specific biotinylated peptide nucleic acid (PNA)-neamine conjugate targeted to HCV RNA for the in situ capture of subgenomic HCV (+) RNA, along with cellular and viral factors associated with it in MH14 host cells. Using this affinity capture system in conjunction with LC/MS/MS, we have identified 83 cellular factors and three viral proteins (NS5B, NS5A, and NS3–4a protease-helicase) associated with the viral genome. The capture was highly specific. These proteins were not scored with cured MH14 cells devoid of HCV replicons because of the absence of the target sequence in cells for the PNA-neamine probe and also because, unlike oligomeric DNA, cellular proteins have no affinity for PNA. The identified cellular factors belong to different functional groups, including signaling, oncogenic, chaperonin, transcriptional regulators, and RNA helicases as well as DEAD box proteins, ribosomal proteins, translational regulators/factors, and metabolic enzymes, that represent a diverse set of cellular factors associated with the HCV RNA genome. Small interfering RNA-mediated silencing of a diverse class of selected proteins in an HCV replicon cell line either enhanced or inhibited HCV replication/translation, suggesting that these cellular factors have regulatory roles in HCV replication.
The hepatitis C virus, a blood-borne pathogen that causes chronic hepatitis, is the primary reason for liver transplantation in the United States. HCV 1 preferentially replicates in liver tissue without any direct cytopathic effect and thus is able to maintain long term, persistent infection. More than 50% of HCV-infected patients do not respond to treatment; instead, the majority of patients develop chronic hepatitis C, which leads to progressive liver fibrosis, cirrhosis, end-stage liver disease, and hepatocellular carcinoma.
The hepatitis C virus is a positive single-stranded RNA virus of a 9.6-kb genome. After its entry into cells, the (+) strand RNA first serves as a messenger RNA for the translation of viral proteins. Newly synthesized HCV replicase (NS5B) then copies the (+) strand RNA genome into the (−) strand RNA, which serves as a template for the production of the viral genome. The conserved 5′- and 3′-nontranslated (5′NTR and 3′NTR) regions of the HCV genome have multiple regulatory elements that are essential for replication of HCV and translation of viral proteins. Although the 5′NTR of HCV contains the internal ribosomal entry site, which is required for cap-independent translation of (+) strand HCV RNA (1-4), it is also the 3′ region of the (−) strand RNA, which functions as the initiation site for replication of the (+) strand HCV RNA genome. The 3′ regions of both (−) and (+) strand HCV RNAs are highly structured and serve as the initiation sites for viral replication (5). Various cellular proteins have been shown to interact with the 5′NTR of HCV RNA; these include La autoantigen (6), nuclear factors NF90, NF110, NF45, and RNA helicase A (7), as well as the polypyrimidine tract-binding protein (8-10). Recently, we affinity-captured different cellular proteins interacting with HCV 3′NTR and identified them by LC/MS/MS; some of these proteins were found to be essential for HCV replication as confirmed by siRNA (11).
Another recent study using sequence-specific gene silencing of the RNAi screen has identified 26 human genes encoding proteins that physically interact with HCV RNA or protein and modulate HCV replication (12). A more direct approach would be to capture the replicating HCV RNA genome in situ under physiological conditions and then identify all the cellular and viral factors associated with the viral genome. The structured HCV genome and the interplay of tightly regulated viral and host factors assembled on it should be highly specific within the cells. We present a novel strategy to affinity-capture the replicating HCV RNA and associated cellular and viral proteins in MH14 cells carrying actively replicating HCV replicons. We have identified these proteins by proteomics technology.
EXPERIMENTAL PROCEDURES
MH14 Cells—Cured MH14 and MH14 cells (a kind gift from Makoto Hijikata, Japan) carrying replicative HCV subgenomic replicons were grown in DMEM (Cellgro) supplemented with 10% fetal calf serum, 100 μg/ml each of penicillin/streptomycin, and 300 μg/ml G418 (13,14). Cured MH14 cells were prepared by treating MH14 cells with 5,000 IU/ml of α-interferon for 2 weeks. The absence of replicon RNA and viral proteins was checked by Northern blotting, RT-PCR, and Western blotting (14). Cells were grown at 37°C with 5% CO2.
Peptide Nucleic Acid (PNA)—We conjugated a 15-mer PNA targeted to the HCV genome with neamine at the N terminus as described previously (15). The PNA-neamine conjugate contained biotin at the C terminus via the Lys residue (Fig. 1). We obtained the PNA sequence (neamine-TACTCGTGCTTAGGA-Lys-biotin), which is complementary to the N-terminal HCV core coding region downstream of the 5′NTR in the MH14 HCV subgenomic replicon (Fig. 1C), on solid support from Panagene (South Korea) and conjugated it with neamine monomer essentially as described previously (15).
Preparation of 32P-Labeled RNA Fragment Corresponding to 5′NTR—The HCV 5′NTR flanking the 3′ N-terminal HCV core coding region was amplified by PCR from the pMH14 template using an up-primer containing the T7 promoter (CGG GAG AGC CAT AGT GG) and a down-primer complementary to the HCV core coding region (GGT TTT TCT TTG AGG TTT AGG). The PCR product corresponding to domains III and IV of the 5′NTR and the 36 nucleotides of the N-terminal coding sequence of the HCV core was transcribed to generate 244-base runoff transcripts, using the T7 transcription kit from Roche Applied Sciences. The RNA transcript was internally labeled by including [α-32P]UTP (3,000 Ci/mmol; Amersham Biosciences) in the reaction solution. Reactions were carried out according to the manufacturer's protocols. The transcripts were purified by phenol/chloroform extraction and ethanol precipitation, dissolved in diethyl pyrocarbonate-treated water, and stored at −80°C. Following treatment with RNase-free DNase I to remove template DNA, the RNA was precipitated with lithium chloride, resuspended in RNase-free water, and used to determine the binding specificity of the Nea-PNA-biotin conjugate.
Gel Retardation Assay—The affinity and specificity of the anti-HCV Nea-PNA-biotin conjugate for its target sequence were evaluated by gel electrophoretic mobility shift analysis. The 32P-labeled HCV 5′NTR RNA (20 nM; 10,000 Cerenkov cpm) was incubated with increasing concentrations of the neamine-PNA-biotin conjugate in a buffer containing 30 mM Tris-HCl (pH 8.0), 75 mM KCl, 5.5 mM MgCl2, 5 mM DTT, 0.01% Nonidet P-40, and 500 ng of poly r(I-C) in a final volume of 15 μl. After 30 min of incubation at 37°C, samples were subjected to gel electrophoresis on a 6% polyacrylamide gel, using the Tris borate buffer system. The RNA-PNA complex was resolved at a constant voltage of 150 V at room temperature for 3 h and subjected to PhosphorImager analysis (GE Healthcare).
Determination of Nonspecific Binding of Cellular Proteins to the Biotinylated PNA-neamine Probe Targeted to the HCV Genome—Cell extract from cured MH14 cells prepared as described above was used to determine the nonspecific binding of cellular proteins to the PNA probe specific to the HCV RNA genome. For binding experiments, 100 pmol of biotinylated PNA-neamine probe was incubated with 20 μl of cell extract in binding buffer containing 1× protease inhibitor mixture (mini-EDTA-free, Roche Applied Science), 1 mM DTT, 100 mM NaCl, 20 mM HEPES (pH 7.5), and 20 units/ml SUPERaseIN. The mixture was incubated on ice for 60 min, after which 75 μl of streptavidin-coated paramagnetic bead suspension (Dynal, Invitrogen) was added to the mixture to capture the biotinylated PNA probe. The mixture was further incubated on ice for 30 min with occasional vortexing. Beads were then washed three times with the binding buffer. Elution was done by adding 30 μl of binding buffer and 30 μl of 2× SDS gel loading dye to the washed beads and heating at 95°C for 5 min. Following magnetic separation of the beads, the supernatant containing eluted proteins was subjected to SDS-PAGE on 8-16% polyacrylamide. The gel was stained with Sypro Ruby dye (Molecular Probes, Invitrogen) for visualization of protein bands. We also included 10 μM biotin as well as a biotinylated DNA oligonucleotide with sequence identical to the PNA probe as controls.
Cellular Uptake and Localization of Nea-PNA Conjugate-The MH14 cells carrying stably replicating HCV subgenomic replicons were grown to 80% confluence in Dulbecco's modified Eagle medium containing 10% FCS. The cells were washed with PBS containing 2% FCS and incubated at 37°C with 2 M FITC-tagged Nea-PNA conjugate or naked PNA. After 3 h of incubation, the cells were washed, detached, and resuspended in the same buffer. Fluorescent signals per 10,000 cells were then obtained by FACScan. To determine cellular localization of the FITC-labeled PNA-neamine conjugate, the cells were washed with PBS and stained with DAPI and wheat germ agglutinin conjugated with rhodamine to label, respectively, the nuclear DNA (blue) and membrane glycoproteins (red). Uptake of FITClabeled PNA-neamine shows green fluorescence at 488 nm. The images were acquired on Nikon A1R confocal microscope.
Affinity Capture of HCV (+) Strand RNA-Protein Complex—We gently washed the subconfluent MH14 cells with cold buffer containing 150 mM sucrose, 30 mM HEPES (pH 7.4), 33 mM NH4Cl, 7 mM KCl, and 4.5 mM magnesium acetate. We layered the washed cells with lysolecithin (200 μg/ml) in the wash buffer for 5 min and then aspirated all the solution from the plates as described earlier (16,17). We then layered the cells with reticulocyte buffer containing 1.6 mM Tris acetate (pH 7.8), 80 mM KCl, 2 mM magnesium acetate, 0.25 mM ATP, 0.1 mM dithiothreitol, and 10 units of RNasin containing 0.5 μM of anti-HCV PNA-neamine-biotin conjugate designed to capture the (+) strand HCV RNA-protein complex. After incubation at room temperature for 2 h, the cells were washed once with the same buffer, gently scraped from each plate, and lysed on ice. We centrifuged the lysed cells for 10 min at low speed (7,000 × g). The supernatant (S7 fraction) was incubated on ice with 150 μl of paramagnetic streptavidin beads for 1 h to capture the HCV RNA-protein complex bound to the Nea-PNA-biotin conjugate. We washed the beads six times with the reticulocyte buffer containing 500 mM NaCl. The captured (+) strand HCV RNA-protein complex was then eluted from the beads by adding 30 μl of binding buffer and 30 μl of 2× SDS gel loading dye to the washed beads and heating at 95°C for 5 min before magnetic separation of beads from eluted proteins. Aliquots of the samples were subjected to SDS-PAGE on an 8-16% polyacrylamide gel and stained with Sypro Ruby dye (Molecular Probes) as described (11).
Mass Spectrometry, Protein Identification, and Database Search—The RNA-protein complex eluted from the beads was entrapped in polyacrylamide gel and washed three times with 50% methanol containing 10% glacial acetic acid, two times with water, and two times with 50 mM ammonium bicarbonate in 30% acetonitrile. Reduction was done by incubating gel pieces for 30 min at 37°C in a solution containing 10 mM DTT, 50 mM ammonium bicarbonate, and 30% acetonitrile. Subsequently, alkylation was done by incubating gel pieces for 30 min at 37°C in a solution containing 45 mM iodoacetamide, 50 mM ammonium bicarbonate, and 30% acetonitrile. Gel pieces were then dehydrated by washing two times with 80% acetonitrile and drying at 60°C for 10 min. The dried gel pieces were subjected to trypsin digestion by adding a solution containing 50 mM ammonium bicarbonate and 10 ng/ml trypsin (Trypsin Promega Gold MS grade). After overnight incubation at 37°C for digestion, reactions were adjusted to 1% trifluoroacetic acid for extraction of peptides from the gel. The extracted peptides were dried in a Speedvac and resuspended in 10 μl of solvent A (2% acetonitrile, 0.1% formic acid) for LC/MS/MS analysis. In brief, the peptides were first separated by reversed phase liquid chromatography on a capillary PepMap100 column (75 μm × 150 mm, 3 μm, 100 Å, C18) (Dionex, Sunnyvale, CA) in a 60-min linear gradient from 10% solvent A to 40% solvent B (95% ACN, 0.1% formic acid). The reversed phase liquid chromatography eluant was directly introduced into a nano-ESI source on an API-US QTOF tandem MS system (Waters). The ESI capillary voltage was set at 3,000 V. The MS spectra (m/z 400-1900) were acquired in the positive ion mode. Argon was used as the collision gas. The collision energy was set within a range between 17 and 55 V, depending on the charge states, and the m/z values of the ions were analyzed. MS/MS spectra were acquired in data-dependent mode, in which the top five most abundant precursors with two to five charges from each MS survey scan were selected for fragmentation. The ProteinLynx Global Server (PLGS) program version 2.1 was used to convert LC/MS/MS raw data into pkl files. These files were submitted for search by the MASCOT search engine (versions 2.3.0 and 1.9.0) against the NCBInr database (May, 2011, 14,261,927 entries) with taxonomy limited to human or hepatitis C virus (237,402 or 53,243 entries). The following MASCOT search parameters were used: peptide mass tolerance, 200 ppm; fragment mass tolerance, 0.6 Da; trypsin cleavage with a maximum of two missed cleavages; variable modifications (S-carbamidomethyl on cysteine and oxidation on methionine). Peptide ion scores >40, indicating identity or extensive homology (p < 0.05), were considered significant. Protein identifications were accepted on the basis of at least two identified peptides. The false discovery rate was less than 1.0% at the peptide level and less than 1.0% at the protein level. Proteins that contained similar peptides and could not be differentiated based on MS/MS analysis alone were grouped.
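The acceptance criteria described above (peptide ion score above the significance threshold, at least two peptides per protein) can be expressed as a simple post-search filter. The sketch below is illustrative only; the record layout and the example entries are hypothetical placeholders, not the actual MASCOT export format.

```python
# Hypothetical post-processing of MASCOT peptide-spectrum matches (PSMs).
# Each PSM: (protein accession, peptide sequence, ion score).
psms = [
    ("gi|123", "LVEALYLVCGER", 62.1),
    ("gi|123", "TTGIVMDSGDGVTHTVPIYEGYALPHAILR", 48.7),
    ("gi|456", "AGFAGDDAPR", 35.0),          # below score threshold, discarded
]

SCORE_CUTOFF = 40.0   # ion score indicating identity/extensive homology (p < 0.05)
MIN_PEPTIDES = 2      # at least two distinct peptides per accepted protein

significant = [p for p in psms if p[2] > SCORE_CUTOFF]

proteins = {}
for acc, seq, score in significant:
    proteins.setdefault(acc, set()).add(seq)

accepted = [acc for acc, seqs in proteins.items() if len(seqs) >= MIN_PEPTIDES]
print(accepted)  # -> ['gi|123'] for these placeholder records
```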
siRNA Transfection—The MH14 cells (2 × 10⁵/well) carrying replicating HCV replicons were grown in a 6-well plate for 24 h and then transfected with 20 nM siRNAs targeted against STAU1, ADAR1, DDX6, PA2G4, HSP60, and IGF2BP1 according to the manufacturer's protocol, using siPORT amine as the transfection reagent (Ambion). The transfected cells were further grown for 72 h. One set of cells was washed, lysed, and analyzed for total protein (BCA protein assay; Pierce). An equal quantity of protein from each set was used for Western blot analysis. Another set of cells was processed for the isolation of total mRNA and subsequent RT-PCR analysis for HCV RNA and actin mRNA or GAPDH mRNA.
Sequence Specificity of PNA-neamine Conjugate Targeted to Core Coding Region of HCV—We used a new class of DNA mimic, PNA, for in situ capture of the replicating HCV genome and associated proteins from MH14 cells. The chargeless PNA molecule has no sugar phosphate backbone; instead, its purine and pyrimidine bases are linked via peptide bonds (Fig. 1A) (18). The oligomeric PNA irreversibly binds to complementary RNA or DNA sequences with very high affinity (19). The cellular uptake of naked PNA is negligible. Earlier, we showed that PNA conjugated with neamine or glucosamine is efficiently taken up by cells without endosomal entrapment (15,20). We conjugated 15-mer biotinylated PNAs with neamine at the N terminus and with biotin at the C terminus. We incubated 20 nM of the 32P-labeled RNA fragment corresponding to domains III and IV of the 5′NTR and 36 nucleotides of the N-terminal coding sequence of the HCV core with different concentrations of PNA-Nea HCV-Core in binding buffer at room temperature for 15 min. We then analyzed the bound PNA-RNA complex by gel retardation. As shown in Fig. 2A, the binding of PNA-Nea HCV-Core to its target sequence was highly specific, with a binding stoichiometry of 1:1. In the presence of 10 and 15 nM concentrations of PNA-Nea HCV-Core, the respective extents of gel retardation were 50 and 75% of the total 20 nM labeled RNA (Fig. 2A, lanes 2 and 3); at a ratio of 1:1 or higher, all the 32P-labeled RNA was in the form of the PNA-RNA complex (Fig. 2, lanes 4 and 5). The scrambled PNA with similar base composition failed to bind the target sequence under similar conditions (Fig. 2B).
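The retardation percentages quoted above are consistent with essentially irreversible 1:1 binding, in which the fraction of RNA shifted is limited only by the PNA:RNA molar ratio. The toy calculation below illustrates that expectation; it is a back-of-the-envelope model, not an analysis performed in the study.

```python
# Expected shifted fraction for stoichiometric, irreversible 1:1 PNA-RNA binding.
rna_nm = 20.0                      # labeled RNA concentration (nM)
pna_titration_nm = [10.0, 15.0, 20.0, 40.0]

for pna_nm in pna_titration_nm:
    bound_fraction = min(1.0, pna_nm / rna_nm)   # all PNA is consumed until RNA is saturated
    print(f"{pna_nm:>5.1f} nM PNA -> {bound_fraction:5.0%} of RNA shifted")
# 10 nM -> 50%, 15 nM -> 75%, >= 20 nM -> 100%, matching the gel retardation data.
```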
Cellular Proteins Do Not Bind to Sequence-specific PNA-Nea-HCV-Core Conjugate-We determined whether cellular proteins could recognize the base sequences on the PNA-Nea HCV-Core probe and bind to it nonspecifically. We incubated the biotinylated PNA-nea conjugate with cell lysate from cured MH14 cells devoid of HCV subgenomic replicons. As a control, we also incubated the cell lysate separately with biotin as well as with a biotinylated oligonucleotide DNA probe with nucleotide base sequence corresponding to the PNA-Nea HCV-Core probe. The biotin and biotinylated probes were then captured from the incubation mixture by paramagnetic streptavidin beads. After the beads were washed, cellular proteins associated with the HCV RNA bound with the biotinylated PNA-neamine probe were resolved on SDS-PAGE. The gel was stained with Sypro Ruby. As shown in Fig. 3, many cellular proteins were captured on the oligonucleotide DNA probe (lane 1). In contrast, fewer numbers of cellular proteins captured on the biotinylated PNA probe (Fig. 3, lane 2) were due to their affinity for biotin (lane 3). The LC/MS/MS analysis of these proteins identified many of them as biotinbinding metabolic enzymes (supplemental Table 1).
PNA-neamine Conjugate Is Efficiently Taken Up by the Cells and Distributed in Both Cytosol and Nucleus—For uptake studies, we prepared a fluorescein-tagged conjugate of a 15-mer PNA-neamine. The fluorescein probe was attached to the PNA moiety of the conjugate (15). The fluorescein-labeled conjugate was dissolved in water, and its concentration was determined by absorption of fluorescein at 490 nm (ε = 67,000) and absorption of PNA at 260 nm (ε = 171,200). Similar molar concentrations obtained by these two methods established their accuracy and indicated the absence of free fluorescein in the preparation. Using this fluorescein-tagged PNA-neamine conjugate, we did a series of experiments to determine the uptake efficiency of the conjugate using flow cytometry. The fluorescein-tagged naked (unconjugated) PNA was used as a control. The results, shown in Fig. 4, A and B, indicate that the PNA-neamine conjugate supplemented in the medium is efficiently taken up by Huh7 cells. At a 2 μM concentration of the conjugate, nearly 80% of cells were fluorescence-positive within 3 h of incubation. Uptake of fluorescein-tagged naked PNA at 5 μM concentration was negligible. We also determined the localization of the conjugate in the cells. As shown in Fig. 4C, the conjugate is uniformly distributed in the cytosol and also localized in the nucleus upon prolonged incubation.
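As a worked illustration of this concentration cross-check, the Beer-Lambert law (c = A / (ε·l)) can be applied to both absorbance readings and the two estimates compared. The absorbance values below are made-up placeholders, and a 1 cm path length is assumed.

```python
# Hypothetical cross-check of conjugate concentration from two absorbance readings.
PATH_CM = 1.0                 # assumed cuvette path length (cm)
EPS_FLUORESCEIN_490 = 67_000  # M^-1 cm^-1, fluorescein at 490 nm (from the text)
EPS_PNA_260 = 171_200         # M^-1 cm^-1, PNA at 260 nm (from the text)

a490 = 0.134                  # placeholder absorbance at 490 nm
a260 = 0.342                  # placeholder absorbance at 260 nm

c_from_490 = a490 / (EPS_FLUORESCEIN_490 * PATH_CM)   # mol/L
c_from_260 = a260 / (EPS_PNA_260 * PATH_CM)           # mol/L

print(f"490 nm estimate: {c_from_490*1e6:.2f} uM")
print(f"260 nm estimate: {c_from_260*1e6:.2f} uM")
# Agreement between the two estimates argues against free fluorescein in the preparation.
```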
We used MH14 cells carrying stably replicating HCV subgenomic replicons and cured MH14 cells (HCV-negative) in our experiments on affinity capture by the PNA-Nea HCV-Core conjugate. The biotinylated PNA-neamine conjugate complementary to nucleotide sequence 342-356 of HCV (+) strand RNA was incubated with MH14 cells to capture the HCV RNA-protein complex in situ. We also incubated the conjugate probe with cured MH14 cells that were devoid of HCV subgenomic replicons. The cells were washed and lysed. The biotin-PNA-neamine conjugate internalized in the cells was captured from the lysate by paramagnetic streptavidin beads. After washing the beads, the PNA-bound RNA-protein complex released in Laemmli gel loading buffer was either resolved by SDS-PAGE, followed by gel staining with Sypro Ruby, or subjected to LC/MS/MS.
The PNA-Nea HCV-Core conjugate efficiently penetrated the cells and captured the HCV RNA-protein complex in situ from MH14 cells (Fig. 5). Protein bands associated with the captured HCV (+) RNA genome from MH14 cells could be seen in the gel (Fig. 5, lane 3). Binding of the RNA-protein complex to the PNA probe was tight enough to withstand washing with 0.5 M salt, as only a few protein bands could be seen in the washes (Fig. 5A, lanes 4 and 5). In contrast, the affinity capture lane from cured MH14 cells devoid of HCV replicons showed few protein bands in the gel (Fig. 5B, lane 3). LC/MS/MS analysis of these proteins identified 23 cellular factors associated with the biotinylated PNA-streptavidin complex (supplemental Table 1). Among these, eight were biotin-binding metabolic enzymes, including pyruvate carboxylase, acetyl-CoA carboxylase, methylcrotonyl-CoA carboxylase, and propionyl-CoA carboxylase. These proteins were excluded from the list of cellular factors identified as being associated with the HCV (+) strand RNA genome.
Identification of Cellular or Viral Proteins Associated with HCV (+) Strand RNA Genome—For identification of specific cellular and viral proteins associated with the HCV RNA genome, the RNA-protein complex released in Laemmli gel loading buffer was processed for LC/MS/MS analysis to achieve the highest level of confidence in our identification. We used LC/MS/MS tandem mass spectrometric detection. The LC/MS/MS approach has three distinct advantages: it separates the tryptic peptides before mass spectrometric analysis, provides sequence information for fragmented peptides, and identifies proteins in protein mixtures. These proteins are listed in Table I with their accession numbers obtained from the protein database (NCBI). We identified three HCV proteins (NS3-4a protease-helicase, NS5A, and NS5B) and 83 cellular proteins in the affinity capture. Many of the cellular proteins belong to transcriptional regulators, as do the far upstream element-binding protein 1 (FBP1), hnRNP-U, hnRNP C-like 1, nuclear co-repressor KAP-1, the La ribonucleoprotein domain family, and the subunit 1 isoform of the CCR4-NOT transcription complex. FBP1, a transactivator of c-myc gene transcription, has been shown to be expressed in hepatocellular carcinoma cells and enhance HCV replication (21).
As expected, many of the identified cellular factors interacting with the HCV (+) strand RNA genome are translation factors, such as elongation factors (eEF2 and EFTu) and initiation factors (eIF3 subunits CELM, eIF3-p44, eIF2 subunit 3, and eIF3-p110). Among these, the initiation factors eIF3 and eIF2 complexed with GTP and tRNA-Met have been shown to be recruited by the HCV IRES to assemble an 80 S ribosome translation complex (22-24). The signaling group of proteins was also found to be associated with the HCV genome. This included cell cycle protein PA-2G4, UBAP2L (NICE-4), and NOMO2. Among these, PA-2G4, which is involved in RNA processing and signaling, has been shown to be associated with the HCV IRES (25), whereas UBAP2L (NICE-4) has been identified as one of the nine gene products associated with the progression of hepatocellular carcinoma (26). Another group of proteins that we identified belongs to chaperon proteins, ribosomal proteins, RNA helicases, DEAD box proteins, oncogenic proteins, and various RNA-binding proteins and metabolic enzymes. Together, they represent a diverse set of cellular factors associated with the HCV RNA genome.
Bioinformatics Analysis of the Identified Proteins—Ingenuity pathway analysis of the identified proteins indicated their association with 12 different molecular and cellular functions (Fig. 6B). Although 15% of the proteins were involved in replication of RNA viruses, 6% of them were directly linked with HCV replication. Another major group of proteins was associated with different cancers, including digestive organ tumors. The canonical pathway analysis matched the identified proteins with 24 different pathways with significant −log(p) values (Fig. 6C). The highest −log(p) values were for the protein ubiquitination pathway, endothelial nitric-oxide synthase signaling, eukaryotic initiation factor 2 (eIF2) signaling, and regulation of eIF4 and p70 S6K signaling involved in the initiation of protein synthesis. Ingenuity pathway analysis also demonstrated that four major disease and disorder developments were significantly associated with our affinity-captured proteins. The majority of them are related to cancer (46%), infectious disease (30%), reproductive system disease (15%), and hepatic system disease (7%) (Fig. 6A).
Silencing of Diverse Class of Identified Cellular Factors Modulates HCV Replication and Translation—Using siRNA, we silenced two RNA editing factors (ADAR1 and Stau1), an RNA helicase (DDX6), a cell cycle signaling protein (PA2G4), a molecular chaperone (HSP60), and a regulator of mRNA stability, translation, and turnover (IGF2BP1) to examine their effect on HCV replication and expression of the viral protein NS5A. Three of the cellular factors (DDX6, PA2G4, and IGF2BP1) were positive controls, as they have been implicated in HCV replication (25,27) or translation (28), whereas the others (Stau1, ADAR1, and HSP60) were considered novel targets. The siRNA was delivered into MH14 cells, which carry actively replicating HCV replicons. Fig. 7 shows that expression of all the siRNA-targeted genes was reduced by more than 95%, as demonstrated by Western blot analysis (lane 4). We also determined the HCV RNA replicon level by RT-PCR and the viral protein expression level by Western blotting for HCV NS5A. Although we noted a direct correlation between reduction in HCV RNA replication and reduced expression of Stau1, DDX6, and HSP60 in MH14 cells, there was an inverse relation between HCV replication and reduced expression of ADAR1 and PA2G4. We found that down-regulation of IGF2BP1 significantly reduced HCV translation as judged by Western blotting of NS5A but had no effect on HCV replication. The down-regulation of ADAR1 and PA2G4 significantly enhanced both HCV replicon RNA and expression of the viral protein NS5A. ADAR1, which catalyzes the deamination of adenosine in double-stranded RNA (29), has been suggested to be involved in IFN-α-mediated clearance of HCV RNA (30). The down-regulation of ADAR1 resulted in 2- and 3-fold stimulation of HCV RNA replication and viral protein expression, respectively. PA2G4, a proliferation-associated signaling protein, has been shown to induce cell cycle arrest in the G2/M phase of the cell cycle (31). Down-regulation of PA2G4 significantly enhanced both HCV RNA replication and viral protein expression in MH14 cells. Stau1, a double-stranded RNA-binding protein, must interact with influenza virus protein NS1 for efficient virus replication (32). We found that down-regulation of Stau1 reduced HCV replication to a nearly undetectable level, indicating its involvement in HCV replication. DDX6 (Rck/p54), one of the several RNA helicases associated with the HCV RNA genome, is overexpressed in human hepatocytes from patients with chronic hepatitis C (33). We found that down-regulation of DDX6 resulted in more than 90% inhibition of both HCV replication and expression of viral protein. HSP60 is a molecular chaperone required for cell survival during stress, which it achieves by restraining p53 function (34). Down-regulation of HSP60 in MH14 cells resulted in more than 90% reduction in HCV replication and 75% reduction in the expression of viral protein NS5A. IGF2BP1, which is a regulator of RNA stability, turnover, and translation, has been shown to be associated with HCV RNA and involved in IRES-mediated HCV translation (28). We found that down-regulation of IGF2BP1 had no effect on HCV replication, but it caused ~60% reduction in the expression of viral protein NS5A, suggesting its specific role in regulation of translation of HCV proteins.
TABLE I. Cellular and viral proteins associated with HCV (+) RNA genome. Most of the host cell proteins listed had a minimum of two peptides matching. The exceptions were Apobec-1 complementation factor (gi|6996658), NICE-4 protein (gi|11990132), nuclear co-repressor KAP-1 (gi|1699027), RAS-p21 activating protein (gi|5031703), and chaperonin T-complex polypeptide (gi|5453603), each of which had a single peptide match (supplemental file 1). a The results are based on two separate experiments. The listed proteins are those scored in both experiments. The cellular proteins from cured MH14 cells (HCV-negative) associated with biotinylated PNA-streptavidin complex are shown in the supplemental Table I.
DISCUSSION
Earlier, we used in vitro transcribed HCV 3′NTR annealed with biotinylated oligo-DNA as bait to capture interacting cellular proteins from cell lysate (11). Although this strategy identified many cellular proteins interacting with HCV 3′NTR, some proteins interacted with the oligo-DNA probe alone; they were also identified and subtracted from the list as nonspecific binders.
In this study, we devised a novel strategy to capture the replicating HCV (+) strand RNA genome in situ and identified associated cellular or viral factors. This strategy, which uses a sequence-specific biotinylated PNA conjugated with the polycationic neamine moiety of neomycin, has four advantages as follows: 1) PNA, being an unnatural DNA mimic with no sugar phosphate backbone, is not recognized by cellular proteins and does not have any affinity for them, so that background signal due to nonspecific protein binding was eliminated; 2) binding of PNA to its target is stoichiometric and irreversible under physiological conditions; 3) the PNA-neamine conjugate in either free form or bound to its target sequence is highly stable and completely resistant to cellular nucleases and proteases; and 4) most significantly, the PNA-neamine conjugate efficiently penetrates the cells and binds to its target RNA in the cytosol, which can then be quantitatively recovered from the cell lysate. We used this strategy specifically to capture the HCV (+) strand RNA genome within MH14 cells carrying a stably replicating HCV subgenomic replicon. Following incubation with the conjugate, the cells were lysed, and the biotinylated PNA-neamine molecules in the cell lysate were recovered by immobilizing them on paramagnetic streptavidin beads. This approach to capturing the HCV genomic RNA-protein complex has positively identified many of the cellular and viral factors associated with the viral genome.
A recent report identified various cellular factors that affect HCV infection and replication in cultured hepatoma cells (12). Also, a combination of proteomics and computational modeling has identified novel host proteins that function as key regulators of HCV-associated metabolic changes (35). Consistent with these observations, we have identified 83 cellular proteins and three viral proteins (NS5B, NS5A, and NS3-4a helicase) that are associated with the replicating HCV (+) strand RNA genome in cultured hepatoma cells. As expected, some ribosomal proteins and translation factors were associated with HCV (+) strand RNA. Besides these, several metabolic enzymes were also scored among the host factors associated with the viral genome.
FIG. 6. Bioinformatics analysis of the identified proteins. The affinity-captured proteins identified by LC/MS/MS were matched against the Ingenuity pathway database of disease and disorder (A), molecular function (B), and canonical pathways (C) that were most significant to the set of identified proteins. The canonical pathway of the identified proteins with a p value for each pathway is indicated by the bar and is expressed as −1 times the log of the p value. The line indicated with the arrow represents the ratio of the number of genes in a given pathway that meet the cutoff criterion (p ≥ 0.05) divided by the total number of genes that make up that pathway due to chance alone. The percent of total protein matched in each category in the dataset of disease and disorder and molecular function is indicated. The gene symbol of each protein that matched against molecular function (B) and individual pathway (C) is also shown.
Some of the identified RNA-binding proteins, including H1, M4, and K, belong to the hnRNP group. The hnRNP group of proteins is important in regulating HCV translation or replication. Among these, hnRNP K physically interacts with HCV core protein (36), although hnRNP C-like protein, which binds the large pyrimidine-rich region of HCV 3′NTRs, may function in the initiation and/or regulation of HCV RNA replication (37). The hnRNP U acts as a basic transcriptional regulator that represses basic transcription driven by several viral and cellular promoters (38). It has also been found to be associated with HCV IRES-binding proteins and is involved in IRES-dependent down-regulation of HCV translation (25). The hnRNP M4 interacts with cell membrane receptors to trigger signaling pathways that promote metastasis (39).
Other RNA binders that score in proteomics include the following: KOC (KH homology), a cancer-related autoantigen that is overexpressed in cancer cells (40); nuclear factor 45 (NF45), which has been shown to be part of HCV replication machineries involved in the regulation of viral translation and RNA replication (7); high density lipoprotein-binding protein (Vigilin), known as a marker gene product of HCV-associated hepatocellular carcinoma (41); and IGF2BP1, which not only affects mRNA nuclear export, localization, stability, and translation but is associated with HCV replicon RNA and enhances IRES-mediated HCV translation (28). T-cluster-binding protein was also scored as HCV RNA binder, which is one of the IFN-stimulated gene products specifically expressed in chronic hepatitis C (42).
RNA and DNA helicases are another important group of proteins found to be associated with HCV (+) strand RNA. Among these, DDX3 has been implicated in several processes that regulate gene expression. This protein has been the prime target for many viruses, including HCV, HBV, and HIV, which interact with DDX3 and modulate its function (43). HCV core protein specifically interacts with DDX3 and may be involved in regulating host cell mRNA translation (44). DDX6 (Rck/p54) is a cellular RNA helicase with ATP-dependent RNA-unwinding activity (45) that has been suggested to function as a proto-oncogene. It is overexpressed in human hepatocytes from patients with chronic hepatitis C (33) and in some colorectal cancers (46); also, its helicase activity is essential for efficient HCV replication (27). The RNA helicase DDX5 (growth-related nuclear 68 protein), which interacts with the C-terminal region of HCV NS5B, has been suggested to be part of the HCV replicase complex (47), whereas Ras-GTPase-activating protein-binding protein 1 (G3BP1) interacts with both HCV NS5B and the 5′ end of the HCV minus-strand RNA, suggesting that it is part of the HCV replication complex (48). Another RNA helicase that we scored, DDX30, has been shown to have an inhibitory effect on HIV-1 packaging and to reduce viral infectivity (49). A similar role for this helicase may be predicted in the HCV life cycle, wherein sequestering of DDX30 by the HCV genome may reduce its restrictive function.
The signaling group of proteins found to be associated with the HCV genome included the cell cycle protein PA-2G4 homolog, which is associated with HCV IRES (25); UBAP2L (NICE-4), which has been identified as one of the nine gene products associated with the progression of HCC (26); and nodal modulator 2 isoform (NOMO 2), which participates in the nodal signaling pathway during vertebrate development and is one of the signature proteins for vascular invasion of hepatitis C virus-related HCCs (50).
The cellular proteins belonging to the RNA editing group were also found to be associated with HCV genomic RNA. One such factor is an adenosine deaminase that acts on double-stranded RNA (ADAR1) and is induced by IFN-α. ADAR1 catalyzes deamination of adenosine in double-stranded RNA (29), which is then targeted for degradation by specific cellular RNase. It has been suggested that the inosine editing function of ADAR1 is involved in successful in vitro clearance of HCV RNA by IFN-α; this protein promises to offer a new therapeutic strategy for viral infections (30). Another RNA editing factor identified was Staufen, a double-stranded RNA-binding protein involved in mRNA transport and localization. This protein is overexpressed in HIV-1-infected cells and incorporated into the package of HIV-1 virions (51). It has also been shown to interact with influenza virus protein NS1 and is required for efficient virus replication (32). A similar role of Staufen in HCV replication may be suggested, as we found that its down-regulation in MH14 cells carrying replicating HCV replicons drastically reduced HCV replication (Fig. 7). It is possible that Staufen may be involved in HCV RNA dimerization to regulate the molecular transition from the synthesis of (+) strand and (−) strand viral RNA to viral RNA translation. The APOBEC1 complementation factor that we have identified is an essential component of an editing complex involved in introducing site-specific deamination of cytosine in mammalian apolipoprotein B mRNA. Although the human liver is deficient in APOBEC1 expression, its interacting partner, APOBEC1 complementation factor, is abundantly expressed and suppresses apoptosis in liver cells (52,53). However, hepatitis C virus triggers the expression of APOBEC1 in hepatocytes and chronic hepatic inflammation caused by HCV infection (54). The cytidine deaminase activity induced by APOBEC family members may function as a genome mutator that generates somatic mutation in targeted host genes, thus contributing to tumorigenesis (55).
As expected, we also found an array of transcriptional regulators to be associated with the HCV genome. The FUSEbinding protein (FBP) is a transcription transactivator of the c-myc gene (56,57). We have demonstrated that FBP interacts with HCV NS5A and significantly enhances HCV replication while strongly inhibiting translation of viral proteins (21). FBP is not expressed in normal somatic cells but is overexpressed in HCC and required for tumor growth (58). Another protein that negatively regulates cellular transcription is the La-related protein LARP7, which was associated with viral RNA. LARP7 binds to 7SK RNA and reverses its antagonistic effect on P-TEFb-mediated stimulation of transcription elongation by RNA polymerase II (59). The RNAi-mediated silencing of LARP7 stimulated polymerase II transcription of cellular as well as viral (HIV-1) genes. Another protein in this group was KAP-1, which interacts with STAT1 and negatively regulates interferon (IFN)/STAT1-mediated interferon-regulatory factor-1 (IRF-1) gene expression (60). Therefore, KAP1 could be one target host protein of HCV infection, which controls IRF-1 activation. CNOT1 is one of the nine components of the CCR4-NOT complex, which functions as a global regulator of gene expression (61) and is required for HCV infection (62).
Cellular factors involved in a variety of transport functions were also affinity-captured with the HCV RNA genome. These are UBQRC2, a subunit of the mitochondrial respiratory chain protein ubiquinol-cytochrome-c reductase complex III, which is involved in electron transfer from ubiquinol to cytochrome c (63); nucleoporin-like protein, which is required for the export of mRNAs containing poly(A) tails from the nucleus into the cytoplasm (64), as well as docking of HIV-1 Vpr at the nuclear envelope (65); and valosin-containing protein (VCP), also known as transitional endoplasmic reticulum ATPase (TER ATPase) or p97, which is an enzyme involved in vesicle transport and fusion, 26 S proteasome function, assembly of peroxisomes, and various cellular events that are regulated during mitosis (66). These cellular factors also include Caprin 1, a cell cycle-associated phosphoprotein required for normal progression through the G1-S phase of the cell cycle (67). Caprin 1 forms a complex with G3BP1 (68), which has been shown to interact with both HCV NS5B and the 5′ end of the HCV (minus) strand RNA (48).
Oncogenic proteins were another group of cellular factors associated with HCV RNA. Among them was the developmentally regulated GTP-binding protein (DRG1), which is critical in cell growth, associated with stem cell leukemia/T-cell acute lymphoblastic leukemia 1 (SCL/TAL1), and stimulates the co-transforming activity of c-Myc and Ras (68). Its abnormal expression triggers the disruption of normal growth control. ELAC2 has been shown to be associated with prostate cancer as mutation in the gene increases the risk of prostate cancer (69). Another oncogenic protein, KIAA1401, is also a transcriptional system regulator 1 (TSR1) and is required during maturation of the 40 S ribosomal subunit in the nucleolus; it also is associated with breast and thymus cancer (70). Endoplasmic reticulum lipid raft-associated 2 isoform 1 (ERLIN2) has been identified as one of the most potently transforming oncogenes expressed in breast cancer (71). Autoantigen is a nuclear protein specifically expressed in cancer cells during the S and G 2 phase. Its N-terminal region of SG2NA (amino acids 1-391) acts as a strong transcriptional activator in both yeast and mammalian cells (72).
We have also scored various chaperon proteins associated with the viral genome. Most notable among them are HSP70 protein 5, HSP70 protein 8 isoform 1, and HSP60. These heat shock proteins (HSPs), designated as chaperones, are required for cell survival during stress and for protein folding, degradation, and reactivation of misfolded proteins (73). HSP70 is highly expressed in hepatocellular carcinoma as compared with its level in normal or benign livers (74). HSP70 physically interacts with NS5A and has been implicated in HCV IRES-mediated translation (75). The chaperonin T-complex polypeptide (TCP1), also known as TCP1 ring complex (TRiC), participates in HCV RNA replication and virion production, possibly through its interaction with NS5B (76). Another molecular chaperone, calnexin, which has a major role in controlling the quality of folding of HCV glycoproteins (77), was also affinity-captured with HCV RNA.
We have also found several translation factors, ribosomal proteins, and metabolic enzymes associated with HCV genomic RNA. Indeed, the information we have generated regarding the identity of cellular factors associated with replicating HCV (+) RNA subgenomic replicons will provide a strong basis for numerous hypothesis-driven studies on the interactions of cellular factors with viral RNA and proteins, as well as the functions of these factors in establishing chronic HCV infection and promoting its progression to LC and HCC. The major challenges are to determine the hierarchical importance of these interactions between HCV and host cell factors, to delineate how these interactions affect patients infected with HCV, and to determine which of these interactions may be potential targets for therapeutic intervention.
* This work was supported, in whole or in part, by National Institutes of Health Grant AI073703 from NIAID and Grant DK083560 from NIDDK.
□ S This article contains supplemental material. ‡ To whom correspondence should be addressed: Dept. of Biochemistry and Molecular Biology and Centre for the Study of Emerging and Re-emerging Pathogens, UMDNJ-New Jersey Medical School, 185 South Orange Ave., Newark, NJ 07103. Tel.: 973-972-0660; Fax: 972-972-5594; E-mail: pandey@umdnj.edu. | 2014-10-01T00:00:00.000Z | 2013-02-21T00:00:00.000 | {
"year": 2013,
"sha1": "8c2266110d5ea5be277d783002fe909103e5b5e0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1074/mcp.m112.017020",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "530c1f2f4026e4b954655a040f01e4f07f7f7ab2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
8920094 | pes2o/s2orc | v3-fos-license | Chest and occipito-frontal circumference measurements in the detection of low birth weight among Nigerian newborns of Igbo ethnicity
Background The World Health Organisation has recommended the use of anthropometric measurements as birth weight surrogates. However, it has been found that cut-off points for these anthropometric measurements vary across nations and ethnic groups. Objectives To determine the predictive values of chest circumference (CC), occipito-frontal circumference (OFC) and their combinations for low birth weight (LBW) detection in Igbo newborns. Methods Live newborns of Igbo origin were recruited within 24 hours of delivery. Their CC, OFC and weight were measured. Cut-off points for predicting low birth weight were determined using ROC analysis. Results A total of 511 live newborns were recruited. For birth weight <2500 g, cut-off values were: CC 30.9 cm; OFC 33.8 cm; summation of CC and OFC 64.9 cm; ratio of CC to OFC 0.92. For weight <2000 g, the cut-off values were: CC 29.6 cm; OFC 32.8 cm; summation of CC and OFC 63.7 cm; ratio of CC to OFC 0.91. CC correlated best with birth weight (r = 0.918). Conclusion CC is the best predictor for LBW.
Introduction
Birth weight is a critical determinant of survival, growth and development of the newborn and also a valuable indicator of maternal health, nutrition and quality of antenatal services [1]. Newborns weighing less than 2500 grams are described as low birth weight (LBW) and have a greater risk of morbidity and mortality [2]. Thus birth weight measurement is an important screening tool for detecting the newborn at risk with special reference to low birth weight.
More than 20 million newborns worldwide are LBW, and it is the single most important underlying risk factor for neonatal deaths [3,4]. It is estimated that 90% of this global burden occurs in developing countries [5] where, on average, 58% of newborn infants are not weighed at birth [3]. The reasons adduced for this are the absence of trained personnel, or that weighing scales may be nonfunctional or unavailable at places of delivery [6][7][8][9].
This challenge notwithstanding, hospital based studies in Nigeria have shown that LBW is responsible for 63% of infant mortality as well as 45.2% of perinatal deaths and carries a 37-fold increased risk of death in the first year of life [10][11][12]. These findings agree with a World Health Organization (WHO) estimate that almost half of newborn mortality is associated with preterm or low birth weight babies [13].
An additional advantage of early identification of LBW babies especially in resource-poor settings is to enable prompt referral which may determine survival [14]. For practical purposes some authors recommend 2000 g as the basis for hospitalizing LBW babies [7,15]. To improve detection of LBW especially in resource-poor countries, alternative measurements have been studied in different racial groups and include chest circumference (CC) [6,15,16] , occipito-frontal circumference (OFC) [17,18], mid arm circumference (MAC) [6,19] and maximum thigh circumference (MTC). CC is preferred because the landmark is easily identified and has less chance of measurement errors [6,20]. The combination of OFC and CC has also been found to be a good predictor for estimation of birth weight in view of the simplicity and non-invasiveness of measuring these two body circumferences [21].
This study was designed to correlate birth weight with CC and OFC and their combinations as summation and ratio. Their suitability in detecting potential LBW newborn babies in a predominantly Igbo ethnic group domain was determined.
Subjects and methods
This was a hospital-based, cross-sectional, descriptive multi-centre study carried out in two tertiary health facilities (University of Nigeria Teaching Hospital (UNTH), Enugu, and Enugu State University Teaching Hospital (ESUTH), Enugu) and one secondary health facility (Mother of Christ Specialist Hospital (MCSH), Enugu). They are all equipped with infrastructure to cater to different aspects of medicine, including Obstetric and Paediatric practice, for the state and its environs. The study was carried out between 1st September and 31st December 2011, at the three study centres.
Inclusion criteria were live newborns delivered at the study centres, irrespective of gestational age, sex or mode of delivery, whose parents were of the Igbo tribe. Babies with gross congenital abnormalities and those whose parents refused consent were excluded from the study.
Ethical approval was obtained from the Research and Ethics Committee of the three hospitals before commencement. Informed consent was obtained from the parent/guardian of each subject before recruitment. All newborn babies who met the study criteria were recruited within the first 24 hours of delivery. The data collected included CC, OFC and weight measurements.
Body measurements (CC, OFC and weight)
To ensure reliability and avoid inter-observer bias, all measurements were taken by one researcher alone. In addition, the anthropometric measurements were recorded before recording the birth weight to minimise potential intra-observer bias. The measurements were taken within the first 24 hours of delivery because of postnatal changes in body water composition and balance [22,23]. A particular sequence of taking measurements was adhered to: OFC first, followed by CC then weight. This was to minimise exposure time and reduce risk of hypothermia. All measurements were taken with the subject lying down.
Chest circumference was measured at the level of the nipple, at the end of expiration, to the nearest 0.1 cm using a non-elastic, flexible, fibre glass measuring tape according to standard techniques described by Forfar [24].
Occipito-frontal circumference was measured as the maximum circumference of the head to the nearest 0.1 cm with a non-elastic, flexible, fibre glass measuring tape passing above the supra-orbital ridges and over the maximum occipital prominence.
All the newborns were weighed naked on a Waymaster infant spring weighing scale to the nearest 50 grams.
Gestational age assessment was corroborated by physical assessment using the New Ballard Score [25]. Where there was discordance between the gestational age by date and the New Ballard Score, the latter was used. Social classification was done using the socioeconomic index scores designed by Oyedeji [26].
Data analysis
All the data obtained was recorded and analyzed using the Statistical Package for Social Sciences (SPSS) version 19.0 and SYSTAT version 13. Continuous variables (CC, OFC) were reported as mean and standard deviation while categorical variables were reported as the number or percentage of subjects with a particular characteristic. The combination of CC and OFC as summation and ratio were also analysed. Chi square was used to test for association between Weight categories and Sex distribution of the newborns. Continuous variables were compared using student's t test and one-way ANOVA while prediction of birth weight by anthropometric variables was done using linear regression analysis. A p-value less than 0.05 was accepted as significant. Receiver operating characteristic curve analysis was used to identify the cut-off values for the different anthropometric measurements to predict LBW. The sensitivity, specificity and predictive values were calculated at serial cut-off points while the area under the curve was determined to evaluate the overall accuracy. Results were presented as prose, tables and figures as appropriate.
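As an illustration of the ROC-based cut-off determination described above, the short sketch below simulates a comparable data set and picks the chest-circumference cut-off that maximizes Youden's J (sensitivity + specificity - 1). The variable names and the simulated birth weight/CC relationship are placeholders for illustration only; the study itself used SPSS and SYSTAT.

```python
# Hedged sketch: ROC analysis for a LBW cut-off, on simulated placeholder data.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
birth_weight = rng.normal(3100, 620, 500)                       # g, hypothetical
cc = 10.5 + 0.0072 * birth_weight + rng.normal(0, 1.0, 500)     # chest circumference, cm
is_lbw = (birth_weight < 2500).astype(int)                      # 1 = low birth weight

# A smaller chest circumference should indicate LBW, so use -CC as the score.
fpr, tpr, thresholds = roc_curve(is_lbw, -cc)
print("AUC:", round(auc(fpr, tpr), 3))

j = tpr - fpr                                                   # Youden's J at each threshold
best = int(np.argmax(j))
print("CC cut-off (cm):", round(-thresholds[best], 1))
print("sensitivity:", round(tpr[best], 3), " 1 - specificity:", round(fpr[best], 3))
```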
Characteristics of study population
A total of 511 newborns who met the inclusion criteria were recruited, out of 857 newborn deliveries in the three centres within the study period. One hundred and eighty three babies (35.8%) were recruited from ESUTH, while 178 (34.8%) and 150 (29.4%) were recruited from MCSH and UNTH respectively.
There were 267 males and 244 females, giving a sex ratio of 1.1:1. Fourteen percent were of low birth weight, 82.6% were of normal birth weight, while 3.3% were macrosomic. There was no significant gender difference in the weight categories (χ 2 = 2.984, p = 0.225), see Table 1. Forty nine (9.6%) of the births were preterm, 448 (87.7%) were term, while 14 (2.7%) were post term. The birth weights (BW) of the subjects ranged from 650 g to 4500 g, with a mean BW of 3110.50 ± 617.51 g. The mean BW of the males (3205.61 ± 614.60 g) was higher than that of the female babies (3006.07 ± 604.13 g), t = 3.678, p < 0.001.
Anthropometric parameters for different weight categories
The total mean CC was 33 ± 2.8 cm, while the total mean OFC was 34.7 ± 2.0 cm. CC/OFC (ratio) and CC + OFC (summation) had total means of 0.94 ± 0.05 and 67.8 ± 4.6 cm, respectively. Analysis of variance showed statistically significant differences among the three weight categories with respect to these measurements (Table 2).
Linear regression analysis
Four linear regression models were created, one each for CC, OFC, CC + OFC (sum) and CC/OFC (ratio) as independent variables and birth weight as dependent variable. The highest coefficient of correlation (R) and coefficient of determination (R 2 ) were associated with CC followed by the sum of the circumferences, OFC and ratio of circumferences in that order. All the correlations were significant at p < 0.001. Also, the lowest standard error of the estimate (SEE) was observed with CC, followed by CC + OFC, OFC and CC/OFC ratio in that order. CC had a higher coefficient of determination (R 2 ) when compared with OFC. Summation of these two variables (CC and OFC) had a higher coefficient of determination (R 2 ) than OFC, however the R 2 was lower than that of CC. The ratio of the parameters gave a coefficient of determination (R 2 ) less than 0.5. The multiple regression model using CC and OFC as independent co-variables produced a higher coefficient of determination (R 2 ) and lower standard error of the estimate (SEE) than any of the other four simple linear regression models (Table 3).
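The regression comparison described above can be reproduced with ordinary least squares alone; the sketch below fits the four simple models and the two-covariate model and reports R2 and the standard error of the estimate (SEE). The arrays are simulated stand-ins, not the study data.

```python
# Hedged sketch: simple and multiple linear regression of birth weight on the
# anthropometric predictors, using plain least squares and placeholder data.
import numpy as np

rng = np.random.default_rng(1)
bw = rng.normal(3100, 620, 500)                       # birth weight, g (hypothetical)
cc = 10.5 + 0.0072 * bw + rng.normal(0, 1.0, 500)     # chest circumference, cm
ofc = 22.0 + 0.0041 * bw + rng.normal(0, 1.0, 500)    # occipito-frontal circumference, cm

def fit_report(name, X, y):
    """Least-squares fit of y on the columns of X; prints R^2 and SEE."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    see = np.sqrt(ss_res / (len(y) - A.shape[1]))     # standard error of the estimate
    print(f"{name:12s} R^2 = {r2:.3f}  SEE = {see:.1f} g")

fit_report("CC",       cc,                          bw)
fit_report("OFC",      ofc,                         bw)
fit_report("CC+OFC",   cc + ofc,                    bw)
fit_report("CC/OFC",   cc / ofc,                    bw)
fit_report("CC, OFC",  np.column_stack([cc, ofc]),  bw)   # multiple regression
```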
Four scatter plot graphs were created, each representing CC, OFC, CC + OFC (sum), CC/OFC (ratio) for newborns weighing less than 2500 g (Figures 1a-d). There was a linear relationship between the anthropometric measurements and birth weight as shown by the positive gradients of the scatter plot diagrams. The highest coefficient of determination (R 2 ) was associated with CC followed by the sum of the circumferences, OFC and ratio of circumferences in that order.
Four scatter plot graphs were created, each representing CC, OFC, CC + OFC (sum), CC/OFC (ratio) for newborns weighing less than 2000 g (Figures 2a-d). There was a linear relationship between the anthropometric measurements and birth weight as shown by the positive gradients of the scatter plot diagrams. The highest coefficient of determination (R 2 ) was associated with CC followed by the sum of the circumferences, OFC and ratio of circumferences in that order. Table 4 shows that both CC and summation of CC and OFC had the best discrimination for birth weight less than 2500 g. Although the AUCs for CC and summation were equal, their shapes were not identical (Figures 3a and c). In this situation, the test with the higher accuracy at the optimum cut-off points has the better discrimination. Comparing CC and CC + OFC (summation) specifically at their optimal cut-off points, CC has a higher accuracy of 94% as against 93% for summation. CC and the summation of CC and OFC gave the best discrimination for birth weight less than 2000 g. They both had equal AUCs and the same accuracy of 92%.
ROC curve analysis for cut-off point determination
The corresponding ROC curves for CC, OFC, CC + OFC and CC/OFC ratio as surrogates for birth weight less than 2500 g are shown in Figures 3a to 3d. For CC, the identified cut-off point was 30.9 cm with a sensitivity of 91.4% and {1 -specificity} of 5.3%. The optimal cutoff point for OFC was 33.8 cm with a sensitivity of 84.4% and {1 -specificity} of 10.1%. With respect to sum of circumferences, the optimal cut-off was 64.9 cm with a sensitivity of 92.2% and {1 -specificity} of 6.5%. Also, CC/OFC ratio had an optimal cut-off point of 0.92 with a sensitivity of 87.0% and {1 -specificity} of 8.5% (Table 5).
The corresponding ROC curves for CC, OFC, CC + OFC and CC/OFC ratio as surrogates for birth weight less than 2000 g are shown in Figures 4a to 4d. For CC, the identified cut-off point was 29.6 cm with a sensitivity of 91.7% and {1 -specificity} of 8.0%. The optimal cutoff point for OFC was 32.8 cm with a sensitivity of 91.7% and {1 -specificity} of 5.9%. With respect to sum of circumferences, the optimal cut-off was 63.7 cm with a sensitivity of 91.7% and {1 -specificity} of 8.2%. Also, CC/OFC ratio had an optimal cut-off point of 0.91 with a sensitivity of 75.0% and {1 -specificity} of 10.5% (Table 6).
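For completeness, predictive performance figures of the kind reported in Tables 5 and 6 follow directly from a 2x2 classification table at a chosen cut-off; a minimal helper is sketched below with made-up counts that only roughly mimic the study's proportions.

```python
# Hedged sketch: sensitivity, specificity, PPV, NPV and accuracy from a 2x2
# table; the counts are illustrative, not the study's actual classification.
def predictive_performance(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# e.g. 66 of 72 LBW babies below the cut-off, 23 false positives among 439 non-LBW
print(predictive_performance(tp=66, fp=23, fn=6, tn=416))
```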
Discussion
Birth weight is an important screening tool for detecting the newborn at risk with special reference to LBW [1]. The LBW incidence of 14% in the current study is comparable to the estimated national average of 12% [27]. Detecting LBW is a challenge in developing countries because of unavailable or unreliable weighing scales and deliveries outside healthcare facilities [3,6,7]. This has led to the need for alternative measurements to assess newborns. In this current study involving babies of Igbo ethnic extraction from Nigeria, birth weight correlated very strongly with the anthropometric variables of CC and CC + OFC (sum), and strongly with OFC. CC demonstrated the best correlation with birth weight. This is similar to the findings of Fawcus in Zimbabwe [28]. It is also in keeping with findings from studies done in Asian countries, which have reported good correlation between CC and birth weight ranging from 0.790 to 0.842 [19,20,29]. The high coefficient of correlation in the current study and the other studies cited above further reinforces the recommendation of the WHO collaborative study [6] to use CC as an alternative measurement for detection of low birth weight. This strong correlation between CC and birth weight may be due to the fact that there are no significant soft tissue changes of the chest occasioned by the delivery process. OFC correlated well with birth weight, though not as strongly as CC. Variations in the degree of moulding and oedema may be responsible for the lower correlation when compared with CC. These soft tissue changes differ from baby to baby depending on the circumstances of labour, such as prolonged and obstructed labour [17]. Such variation may likely affect the correlation between OFC and birth weight.
Figure 1 Scatter plots/regression lines of birth weight (g) on CC, OFC, CC + OFC (sum), and CC/OFC (ratio) for newborns <2500 g (panels a-d).
The summation of OFC and CC had a strong correlation with birth weight, superior to OFC alone and approaching that of CC. However, the summation of OFC and CC as a surrogate for birth weight requires mathematical calculation and thus may offer no practical advantage over CC alone. No previous study on summation of OFC and CC for prediction of birth weight was found.
Figure 2 Scatter plots/regression lines of birth weight (g) on CC, OFC, CC + OFC (sum), and CC/OFC (ratio) for newborns <2000 g (panels a-d).
CC/OFC ratio had the least correlation among all the parameters analysed in the current study. Furthermore, the ratio of the parameters gave a coefficient of determination (R 2) less than 0.5, which indicates that less than half of the variation in birth weight can be explained by the CC/OFC ratio. Hence, there is no advantage in working out the ratio. It also requires calculation and may not be of much use to the semi-skilled labour attendant. No previous study was found on CC/OFC ratio for prediction of birth weight. The multiple regression model using CC and OFC as independent co-variables explains more of the variation in birth weight than any of the four simple linear regression models and is thus the most predictive formula for birth weight estimation. However, it may have limited application in the field because of the calculations involved.
A previous study in Nigeria revealed that Igbo babies have the highest birth weights among the ethnic groups in Nigeria [30]. When compared to figures from outside Nigeria, the cut-off point for LBW in the current study was higher than the value obtained by Fawcus [28] in Zimbabwe, who reported 30 cm. However, it is similar to the value obtained by Moshen [17] in Egypt, who reported a cut-off point of 31 cm, and to those from Asia [20,31]. The findings of the current study and other studies from both Africa [17,28] and Asia [20,31] fall within a range of 29.0 cm to 31.0 cm. This range may be considered wide enough to highlight the challenge in adopting a universal cut-off for LBW.
The mean birth weight obtained in the current study is somewhat higher than those of Ezeaka et al. [18] in Lagos and Swende [32] in Makurdi, who reported lower values of 2890 g and 3080 g respectively. It is however lower than the 3200 g obtained by Patwari and colleagues [30], who studied only babies from privileged backgrounds in Maiduguri, a region with comparatively lower mean birth weight. When compared to figures reported from outside Nigeria, the mean birth weight found in the current study is substantially higher than the 2364 g and 2866 g observed in India and Vietnam respectively [6]. On the other hand, it is smaller than the 3300 g to 3650 g reported from North America and Europe [33,34]. The reasons for the observed differences could range from racial and ethnic to socioeconomic factors.
Figure 3 ROC curves for CC, OFC, CC + OFC, and CC/OFC (ratio) as surrogates for birth weight less than 2500 g (panels a-d).
Development of colour-coded tapes for use by midwives and TBAs or family members will facilitate identification and referral of LBW newborns. Based on the cut-off points from this study, a colour-coded tape can easily identify three weight groups: those weighing more than 2500 g will fall within the green area, those weighing 2000-2500 g will fall within the yellow area, while those weighing less than 2000 g will fall within the red area.
Conclusion
CC appears to be the best surrogate for detecting LBW infants. It is easy to measure and demonstrated the best correlation of all the parameters. This finding is in keeping with the WHO recommendation and should be encouraged in rural areas and primary health care centres where weighing scales are likely to be unavailable or unreliable. A measuring tape is the only tool required, and it is readily available, affordable and easily replaceable when damaged.
Figure 4 ROC curves for CC, OFC, CC + OFC, and CC/OFC (ratio) as surrogates for birth weight less than 2000 g (panels a-d).
Table 6 Predictive performance of selected median cut-off points of CC, OFC, summation and ratio as surrogate indices for birth weight <2000 g | 2017-06-30T08:30:23.081Z | 2014-10-28T00:00:00.000 | {
"year": 2014,
"sha1": "21d77b9c7686ede8128f7c67516248631b9dc985",
"oa_license": "CCBY",
"oa_url": "https://ijponline.biomedcentral.com/track/pdf/10.1186/s13052-014-0081-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b1495a0a19254b4718e3e59f3a2895d55960c57a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235193701 | pes2o/s2orc | v3-fos-license | Model-based Optimization of Biopolymer Production from Glycerol
The present study focuses on sustainable production of biodegradable polymers by Cupriavidus necator DSMZ 545 using glycerol as substrate. The batch growth and biopolymer production kinetics were established in a 7-L bioreactor, which resulted in a total biomass of 8.88 g L–1 and poly(3-hydroxybutyrate) (PHB) accumulation of 6.76 g L–1. The batch kinetic and independently acquired substrate inhibition data were then used to develop a mathematical model for PHB production process. This was eventually used to design different nutrient feeding strategies under constant feed rate, decreasing feed rate, and pseudo steady state of substrate (glycerol) to optimize the PHB production during fed-batch cultivation. Among all the fed-batch cultivation strategies, the highest PHB accumulation and productivity of 13.12 g L–1 and 0.27 g L–1 h–1, respectively, was achieved in fed-batch bioreactor cultivation where a pseudo steady state with respect to glycerol was maintained.
Introduction
PHAs (polyhydroxyalkanoates) have received considerable attention as a substitute for synthetic polymers. Not only do they possess properties similar to conventional petrochemistry-derived plastics, but they are also biodegradable in nature. However, the major obstacle to the large-scale production of biopolymers is the cost of PHA/PHB production, which is currently much higher than that of conventional plastics, thereby making them less popular than their counterparts 1 . This is primarily due to high substrate cost, low concentration of PHB in the growing cells, low rate of PHB accumulation, and expensive recovery protocols for the PHB. One way to reduce the overall cost of the PHB production process is to use cheaper substrates (e.g., glycerol, which is a by-product of the biodiesel industry), which can be coupled with highly efficient isolation and purification protocols for PHB to economize the production cost further.
The major aim of the present study was, therefore, to investigate the use of glycerol as a renewable raw material for the fermentative production of PHB. With the increasing global interest in biofuel production, it was considered interesting to examine the availability of this industrial by-product (glycerol) of Jatropha biofuel industry for the microbial PHB production. Some reports [2][3][4] have indicated that Cupriavidus necator is able to consume glycerol and accumulate biopolymer (PHB) under specific cultivation conditions (excess availability of substrate and limiting nitrogen concentrations). Apart from this, optimization of process parameters (physical and chemical) [5][6][7] and bioprocess engineering strategies [8][9][10] are some other approaches generally employed to address this problem and achieve cost-effective PHB production. In the present studies, mutant strain of C. necator DSMZ 545 (of older wild type C. necator DSMZ 529) was used. Thus, the main aim was to optimize the medium recipe and develop a simple mathematical model, which not only satisfactorily describes the observed batch kinetics of cultivation, but also explicates substrate inhibition and limitation under different nutrient feed conditions of fed-batch cultivations.
Different nutrient feeding strategies have been implemented by different researchers during fed-batch cultivation for process improvement with respect to PHB accumulation and/or productivity for other culture systems [11][12][13][14] . However, to date there are only very few reports on kinetic analysis of PHB production by C. necator [15][16][17][18] and particularly none on the use of a mathematical model for the design of nutrient feeding strategies in fed-batch cultivation for growth-associated PHB production using C. necator DSMZ 545. Hence, the present work highly advocates that mathematical models could be an excellent tool to understand system behaviour and help in the design of cultivation strategies for process optimization with minimum experimentation 19 . In addition, since none of the literature reports on C. necator has employed a mathematical model for the design of nutrient feeding strategy(ies) in fed-batch cultivation, this can be considered as the stepping stone for employment of simple and logical approaches to fed-batch cultivations and their optimization. This would significantly minimize the experimentation needed to increase the yield and productivity of PHB accumulation. The developed mathematical model could also serve as a useful tool to design appropriate reactor operation strategies to optimize the PHB concentration and/or productivity 19 . The present investigation thus focused on economical biopolymer production for societal applications using engineering optimization tools.
Microorganism and maintenance
The strain Cupriavidus necator DSMZ 545 was procured from German Collection of Microorganisms and Cell Cultures (DSMZ, Germany) and activated in Luria-Bertani Media (HI Media, India) by incubating at 30 °C for 48 h. Thereafter, the cells were grown and maintained on LB Agar plates/ slants at 30 °C for 48 h, and then stored at 4 °C. These were then subcultured monthly to maintain the viability of the organism.
Culture media
No literature reports were available on optimized medium recipes for utilization of glycerol by C. necator. An attempt was, therefore, made to determine the highest concentrations of the key nutrients commonly used for cultivation of C. necator from existing literature studies 2,20-25 which formed the basis for statistical optimization in the present studies. The concentrations of these key nutrients were then varied by ±20 % to determine the range (low/high concentrations) to be used for identification of the statistically optimized media as described below: 40 g L -1 (pure) glycerol, 2 g L -1 (NH 4 ) 2 SO 4 , 1.5 g L -1 KH 2 PO 4, 3.5 g L -1 Na 2 HPO 4, 0.2 g L -1 Mg-SO 4 •7H 2 O, 0.02 g L -1 CaCl 2 •2H 2 O, 0.058 g L -1 Ammonium Fe(III) citrate, and 1 mL trace element solution (TES). TES consisted of 0.1 g L -1 Zn-SO 4 •7H 2 O, 0.03 g L -1 MnCl 2 •4H 2 O, 0.30 g L -1 H 3 BO 3 , 0.2 g L -1 CoCl 2 •6H 2 O, 0.02 g L -1 NiCl 2 •6H 2 O, and 0.03 g L -1 Na 2 MoO 4 •2H 2 O. Sulphates and phos-phates in the medium were autoclaved in separate flasks to prevent the precipitation of medium components during sterilization; ammonium Fe(III) citrate and TES were only filtered sterilized using 0.22 µm syringe filter (Millipore, Ireland). All the sterilized medium components were then mixed aseptically in the laminar hood chamber. All culture media throughout the experiments were adjusted to a pH value of 6.8 and incubated at 30 °C 26 .
Inoculum development
Two loops of actively growing colonies on Luria Broth (LB) agar plates were transferred aseptically to 100-mL flask containing 20 mL sterile LB broth. These cultures were incubated overnight at 30 °C and later transferred to 50 mL of sterile medium in a 250-mL flask with 5 % v/v of the inoculum to ensure the viability of the cells.
The inoculum for shake flask and/or bioreactor cultivation was developed according to the aforementioned protocol to ensure reproducibility in repetitive cultivations. For the chemically defined media, the concentration of the major limiting nutrient, glycerol, was enhanced gradually from 10 g L -1 to 20 g L -1 in a stepwise manner (while the other medium components were kept the same), and then finally to the statistically optimized concentration of 40 g L -1 in the final shake flask to let the cells slowly adapt to a high glycerol concentration. An amount of 200 mL of inoculum culture of C. necator (5 % of working volume) was then used to inoculate the bioreactors.
Selection of significant effectors-Plackett Burman Design
Plackett-Burman (PB) experimental design was used as an initial screening tool for identifying significant process variables affecting the key responses, namely, biomass and PHB accumulation. Seven factors - glycerol, (NH 4 ) 2 SO 4 , KH 2 PO 4 , MgSO 4 •7H 2 O, Na 2 HPO 4 , CaCl 2 •2H 2 O, and TES - were selected as the key effectors for growth and PHB production by C. necator (Table 1). The highest concentration ranges of the above effectors were selected from the extensive literature studies available 13,14,[22][23][24][25] for the PHB fermentation process by C. necator (formerly known as Wautersia eutropha or Ralstonia eutropha). These nutrients were then varied by ±20 % to determine the range for the statistical optimization protocol. Two concentration levels, high (+1) and low (-1), were thus identified, and their impact on the responses, biomass and PHB formation, was examined. A set of twelve experiments was formulated using Stat-Ease Design Expert Software (Design Expert 5.0.9, Stat Ease Inc., MN, USA). These trial experiments were then performed in the shake flask, and the responses (biomass and PHB) were assessed. Statistical analysis of the 12 trial experiments and their responses (biomass and PHB) by the Stat-Ease software yielded t-coefficient values for all seven effectors. Those medium components which yielded the highest t-coefficient values were then selected for Response Surface Methodology (RSM) studies to obtain their optimum concentrations.
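A brief sketch of how such a screening analysis can be reproduced outside Design Expert is given below: the standard 12-run Plackett-Burman matrix is built from its cyclic generator row, and factor t-values are obtained from an ordinary least-squares fit. The response values are hypothetical, not the shake-flask data of Table 2.

```python
# Hedged sketch: Plackett-Burman screening and factor t-values via OLS;
# the 'biomass' response is simulated, not the measured study data.
import numpy as np
import statsmodels.api as sm

gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])       # 12-run PB generator row
rows = [np.roll(gen, k) for k in range(11)] + [-np.ones(11)]
design = np.array(rows)[:, :7]                                # first 7 columns for 7 factors

factors = ["glycerol", "(NH4)2SO4", "KH2PO4", "MgSO4.7H2O", "Na2HPO4", "CaCl2.2H2O", "TES"]
rng = np.random.default_rng(2)
biomass = 3.0 + 0.6*design[:, 0] + 0.4*design[:, 1] + 0.3*design[:, 6] + rng.normal(0, 0.2, 12)

fit = sm.OLS(biomass, sm.add_constant(design)).fit()
for name, t in sorted(zip(factors, fit.tvalues[1:]), key=lambda p: -abs(p[1])):
    print(f"{name:12s} t = {t:+.2f}")
```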
Response Surface Methodology for optimization of concentrations of effectors
Response Surface Method (RSM) was utilized to optimize the effect of varying concentration of the effectors in detail on output responses (biomass and PHB). The effect of changing the concentrations of each of the three critical effectors (screened by the experimental design of Plackett-Burman) on the overall responses (biomass and PHB) was studied in detail using Central Composite Design (CCD), and a suitably identified statistical model (describing linear squared and interactive effects of these effectors) was then developed. The 2 3 -Factorial CCD was formulated using Design Expert (5.0.9) software (Stat-Ease Corporation, USA), which led to a total number of 20 trial experiments (Table 4). These were then performed in shake flask to optimize the concentration of those selected effectors having high 't'-values. The concentration of these nutrients and their responses were used for nonlinear regression analysis and identification of model parameters of responses (biomass and PHB) as described later. A special feature of the software, point-prediction, was used thereafter to identify the final optimized values of the effectors.
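As a rough illustration of the response-surface step described above, the sketch below fits a full quadratic model (linear, squared and interaction terms) to coded factor settings and then locates the predicted optimum on a grid, which is essentially what the point-prediction feature automates. The 20 factor settings and the response surface are invented placeholders, not the Table 4 data.

```python
# Hedged sketch: fit a second-order (RSM) model to coded CCD factors and locate
# the predicted optimum; all data here are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1.68, 1.68, size=(20, 3))            # stand-in for the 20 coded CCD runs

def true_surface(a, b, c):                            # hypothetical response generator
    return 3.5 - 0.8 * a**2 - 0.5 * b**2 - 0.3 * c**2 + 0.2 * a * b

y = true_surface(*X.T) + rng.normal(0, 0.1, 20)       # hypothetical biomass response

def quad_terms(a, b, c):
    """Full quadratic model terms: intercept, linear, squared, and interactions."""
    return np.column_stack([np.ones_like(a), a, b, c, a*a, b*b, c*c, a*b, a*c, b*c])

coef, *_ = np.linalg.lstsq(quad_terms(*X.T), y, rcond=None)

grid = np.linspace(-1.68, 1.68, 41)
A, B, C = np.meshgrid(grid, grid, grid, indexing="ij")
pred = quad_terms(A.ravel(), B.ravel(), C.ravel()) @ coef
i = int(np.argmax(pred))
print("predicted optimum (coded A, B, C):", A.ravel()[i], B.ravel()[i], C.ravel()[i])
print("predicted response at optimum:", round(pred[i], 2))
```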
Preliminary substrate inhibition studies
To investigate the effect of limiting substrate(s) (glycerol and nitrogen) on the growth of C. necator, substrate inhibition studies were conducted separately in 1-L shake flasks with 250 mL optimized media by changing the concentrations of aforementioned key limiting nutrients, while keeping the concentrations of the rest of the medium components at their optimal value. The effect of the increase in the substrate (glycerol concentration) from 5 g L -1 to 100 g L -1 on C. necator growth was observed while keeping the rest of the medium components at optimum value during shake flask cultivation by monitoring the OD 600nm (Optical Density) during the initial growth phase. Samples were withdrawn at 2-h intervals and analyzed for biomass. Similarly, the effect of increase in nitrogen concentration from 0.5 g L -1 to 13 g L -1 on the growth of C. necator was monitored while keeping the glycerol concentration constant at 40 g L -1 . The maximum specific growth rate (µ max ) at different times for substrate inhibition studies was calculated as slope of a graph between lnX vs time with respect to glycerol/ nitrogen concentrations in the exponential growth phase.
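The slope calculation described above amounts to a straight-line fit of ln(biomass proxy) against time over the exponential phase; a minimal sketch with hypothetical OD600 readings is shown below.

```python
# Hedged sketch: specific growth rate as the slope of ln(OD600) vs time,
# using hypothetical readings taken every 2 h during the exponential phase.
import numpy as np

t = np.arange(0, 10, 2.0)                                   # h
rng = np.random.default_rng(4)
od600 = 0.05 * np.exp(0.53 * t) * np.exp(rng.normal(0, 0.02, t.size))

mu, ln_x0 = np.polyfit(t, np.log(od600), 1)
print(f"specific growth rate mu = {mu:.3f} 1/h")
```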
Study of batch growth and PHB production kinetics of C. necator in 7-L bioreactor using statistically optimized media Batch growth and PHB accumulation studies of C. necator were done in a 7-L stirred tank double-jacketed bioreactor equipped with two six-flatblade impellers (Applikon Dependable Instruments, The Netherlands) and three baffles with pH, temperature, dissolved oxygen sensor, and ADI 1025 Controller containing 4-L optimized medium recipe as described in the section "Culture media". The dissolved oxygen (DO) concentration inside the bioreactor was measured by Applisens dissolved oxygen probe (Applikon Dependable Instruments, The Netherlands), and its concentration was maintained above 30 % by manually adjusting the speed of the agitator and/or flow rate of sterile air in the bioreactor. The temperature was maintained at 30 °C by circulating constant-temperature water in the jacket of the bioreactor through Chilled Water Circulator (Julabo FP50, Germany). Culture pH was maintained at 6.8 by automatic addition of 2N HCl/2N NaOH solution through ADI pH controller unit. Samples were withdrawn at intervals of 3 h, and analyzed for biomass, residual glycerol, nitrogen concentration, and PHB content. The batch bioreactor experiments were performed for 48 h (in triplicate) and average values of process variables (X, S, P) are reported.
Development of batch mathematical model
A mathematical model was developed using the batch growth and PHB accumulation kinetics obtained from bioreactor experiments. The independently acquired data obtained from substrate inhibition (glycerol and nitrogen) experiments were also used for the development of the mathematical model.
The following assumptions were made for the development of the mathematical model: -Glycerol and nitrogen were the only limiting (substrates) affecting the growth and PHB production.
-The rest of the medium components were available in excess during the entire fermentation.
-The temperature (30 °C) and pH (6.8) of the culture broth was maintained constant throughout the course of cultivation.
Nonlinear regression technique as proposed by the original algorithm of Rosenbrock 27 , and the computer programs and methodology described by Volesky and Votruba 28 were then used to identify the optimized values of model parameters. This technique minimizes the differences between the experimental data points and corresponding model simulations for different process variables at different times to carefully define the objective function SSWR (explained in Equation 11).
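To make the fitting step concrete, the sketch below minimizes a weighted sum of squared residues of the form SSWR = sum over i,j of (delta_ij/W_j)^2 (one common way of writing the normalization described later in the Results) for a deliberately simple one-variable logistic stand-in model; SciPy's Nelder-Mead simplex is used in place of Rosenbrock's rotating-direction search, which is not available in SciPy. The same pattern extends to the full multi-variable ODE model.

```python
# Hedged sketch: weighted least-squares parameter fitting (SSWR-style objective)
# with a Nelder-Mead search standing in for Rosenbrock's algorithm. The logistic
# model and the pseudo-data below are placeholders, not the C. necator model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t_obs = np.arange(0, 42, 3.0)                        # sampling times, h

def simulate(params):
    mu_max, x_max = params
    rhs = lambda t, x: mu_max * x * (1.0 - x / x_max)      # logistic growth stand-in
    sol = solve_ivp(rhs, (0, t_obs[-1]), [0.2], t_eval=t_obs)
    return sol.y[0]

rng = np.random.default_rng(5)
x_obs = simulate([0.25, 8.9]) * (1 + rng.normal(0, 0.03, t_obs.size))   # pseudo-data

W = x_obs.max()                                      # weight = maximum of the variable

def sswr(params):
    if min(params) <= 0:                             # keep the search physically meaningful
        return 1e6
    delta = simulate(params) - x_obs                 # model - experiment at each time point
    return float(np.sum((delta / W) ** 2))

fit = minimize(sswr, x0=[0.1, 5.0], method="Nelder-Mead")
print("fitted mu_max and X_max:", np.round(fit.x, 3), " SSWR:", round(fit.fun, 5))
```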
Development of model-based nutrient(s) feeding strategy(ies) in fed-batch fermentation
From substrate inhibition studies it was established that high concentrations of the major nutrients (glycerol and nitrogen) inhibit the microbial growth and product accumulation, thereby indicating that only slow feeding of the nutrients in the bioreactor (fed-batch cultivation) would eliminate the substrate inhibition problem, and be a better choice for obtaining significantly high biomass with increased PHB accumulation and productivity in the bioreactor. Therefore, the mathematical model equations (Equations 1-10) were extrapolated to simulate fed-batch cultivations by taking overall mass balance around the bioreactor and incorporating the dilution terms in the model equations (Equations [12][13][14][15][16][17]. This fed-batch mathematical model was then used to generate several offline computer simulations for different nutrient(s) feeding strategies as described further.
Fed-batch cultivation strategy
The bioreactor cultivation was initiated as a batch in a 7-L bioreactor (working volume 4-L) containing optimized media (initial glycerol concentration of 40 g L -1 ). When the culture was in exponential growth phase, constant feeding of glycerol (175 g L -1 ) and nitrogen (2.5 g L -1 ) at 100 mL h -1 was initialized at 15 h (as identified by model simulations). The constant feeding of substrates glycerol and nitrogen were continued for 20 h (15 h -35 h) for maintenance of culture growth (as predicted by the model). After the nutrient feeding was stopped, the (secondary) batch cultivation was resumed further until 48 h for the consumption of the residual substrates (glycerol and nitrogen) in the bioreactor. For maintenance of pseudo steady state with respect to substrate (glycerol), variable feeding of glycerol at 200 g L -1 concentration (along with proportionately increased concentrations of other medium components) had started at 20 h and continued until 40 h of cultivation to ensure availability of a constant non-limiting and non-inhibitory concentration of (key) substrate glycerol concentration (as identified by model) during the fed-batch cultivation period. Secondary batch fermentation was then performed to consume the residual glycerol in the bioreactor. For decreasing feed rate, feeding of both glycerol (200 g L -1 ) and nitrogen (3.5 g L -1 ) was initiated at 16 h at a feed rate of 75 mL h -1 . The feeding rate thereafter was reduced to 55 mL h -1 after 24 h of cultivation. Feeding of substrate was then continued at this flow rate for another 8 h, and thereafter the flow rate was further reduced to 35 mL h -1 , which was then continued until the end of fermentation. The guidance for gradual decrease of nutrient feed rate emerged from several off-line model simulations on computer with the main aim that the need of secondary batch cultivation may be eliminated, and it should be possible to harvest the bioreactor immediately after the termination of the nutrient feed to reduce total cultivation time and eventually enhance the biomass/PHB accumulation.
Analytical methods
Optical density of appropriately diluted culture broth was measured by spectrophotometer (OPTI-ZEN model 3220UV, Mecasys, Korea) at 600 nm against the medium blank. An amount of 30 mL of the fermentation broth samples were withdrawn from the bioreactor every three hours. This was then centrifuged (Centrifuge 5810 R, Eppendorf India Limited, India) at 9,000 g for 15 min at a temperature of 4 °C, and the supernatant was used for the analysis of residual nutrients concentration. Glycerol was analyzed by high performance liquid chromatography (HPLC) (Waters 515), and separation was achieved using Phenomenex Rezex RCM-Monosaccharide Ca 2+ (8 %) column (Column Dimensions: 300 x 7.8 mm ID; Elution Type: Isocratic; Eluent: Water; Flow Rate: 0.5 mL min -1 ; Col. Temp.: 60 °C; Refractive Index (RI) detector). The Kjeldahl method was used for the analysis of residual ammonia nitrogen 29 . The cell pellet left in the centrifuge tube after centrifugation was dried at 90 °C in a hot air oven, and CDM (cell dry mass) was calculated. PHB concentration was quantified by gas chromatography (GC 2010 Shimadzu Co., Japan) using benzoic acid as an internal standard 30,31 .
Results and discussion
Statistical optimization of nutrients for medium recipe
The initial screening of the nutrients (effectors) for their effect on the desired responses (biomass and PHB) was performed according to the Plackett-Burman protocol, which helped in prioritizing the effectors affecting the overall response. Each nutrient effector was evaluated at two concentration levels (as shown in Table 1), high (+1) and low (-1), as per the guidance available from the literature studies [12][13][14][22][23][24][25] . A nutrient recipe consisting of 12 experiments was designed by the software. Table 2 highlights the concentration distribution of these factors in the trial experiments according to the Design Expert software and their responses in the study. From Table 2, it can be concluded that a high concentration of glycerol can affect biomass and PHB differently. Table 3 summarizes the statistical 't'-value coefficients for the different effectors (A-G), highlighting the significance of the effect of each effector on the responses (biomass and PHB). As may be seen from Table 3, the 't' values for the nutrients glycerol, (NH 4 ) 2 SO 4 , and TES were positive (in increasing order), and therefore these were the major nutrients (critical effectors) affecting biomass and PHB formation by C. necator. This was an obvious conclusion also, since carbon and nitrogen are the two most important factors that play a key role in PHB accumulation, which normally takes place under excess availability of carbon source and low (limiting) concentrations of nitrogen and/or oxygen 32 . Thus, optimizing these two effectors was expected to yield larger values of PHB concentration and productivity. Trace element solution (TES), which comprises a number of different micronutrients, is also essentially required for maintenance of protein structure and functioning of key biosynthetic enzymes. Thus, these three factors, screened by the experimental design of Plackett-Burman, were then subjected to further analysis by Response Surface Methodology (RSM) to identify the appropriate concentrations of the selectively identified key nutrients.
Based on the Plackett-Burman design, the three factors (glycerol, (NH 4 ) 2 SO 4 , and TES) that showed a positive influence on growth and PHB were selected, and CCD was used to determine the optimum levels of these parameters. A total of 20 experimental runs with different combinations of glycerol (A), (NH 4 ) 2 SO 4 (B), and TES (C) were performed (Table 4), and the responses with respect to biomass and PHB were established. The results were analyzed by the Design Expert Software, and quadratic regression equations (i) and (ii) were obtained in terms of the selected variables for growth and PHB production; the analysis identifies the model parameters of equations (i) and (ii) by non-linear regression of the measured responses against the effector concentrations of the trial experiments. The equations (i) and (ii), in terms of coded factors, can be used to make predictions of the responses for any given concentration levels of the different effector(s). The model demonstrated an adequate precision of 4.65 with respect to biomass, and 4.70 with respect to PHB. Table 5 presents the design matrix evaluation for the Response Surface Quadratic Model, which shows that the model will adequately predict the responses within the design space. Thus, it exhibited that the model is a good predictor of the responses.
Optimized concentration of the medium components was established by examining point prediction feature of the Design Expert Software. The point prediction feature of the software allows the user to vary the concentration of different effectors at discrete levels with multiple combinations to probe their effect on the predicted responses in zero time. Point prediction uses the models fit during analysis on the factors to compute the point prediction and interval estimates. The predicted values are updated as the levels are changed. A maximum biomass and PHB concentration of 3.7 g L -1 and 1.36 g L -1 , respectively, were obtained by the statistical optimization protocol at the following optimized values of medium components: 40 g L -1 of glycerol, 2 g L -1 of (NH 4 ) 2 SO 4 and 1 mL L -1 of TES.
Substrate (glycerol and nitrogen) inhibition studies
The effect of increasing glycerol concentration (from 5 to 100 g L -1 ) on C. necator was assessed during shake flask cultivations, where the rest of the medium components were maintained at a constant level in the growth medium. The specific growth rate (µ) increased gradually as the concentration of glycerol was increased from 5 g L -1 to 25 g L -1 until a maximum value of specific growth rate (µ max = 0.53 h -1 ) was observed at 25 g L -1 (Fig. 1). This may be primarily due to substrate limitation; thereafter inhibition of culture growth and decrease in specific growth rate of C. necator was observed when the glycerol concentration was further increased beyond 25 g L -1 . The specific growth rate fell sharply to a value of 0.18 h -1 upon growth of C. necator at 60 g L -1 glycerol concentration. Specific growth rate continued to decrease until it reached 0.013 h -1 (almost zero) at a concentration of 100 g L -1 . Therefore, 100 g L -1 was considered as highest concentration at which complete growth inhibition sets in. This was considered as critical glycerol concentration (S m ) at which almost complete cessation of growth occurs.
Similarly, experiments were conducted by varying initial concentrations of nitrogen (as ammonium sulphate) to establish its effect on the growth of C. necator. Varying initial nitrogen concentrations (0.5-13 g L -1 ) were taken in the medium, which contained 40 g L -1 glycerol (constant) as the carbon source. A maximum specific growth rate (µ max ) of 0.24 h -1 was obtained at a nitrogen concentration of 2 g L -1 ammonium sulphate, while complete inhibition of culture growth was observed at a nitrogen concentration of 13 g L -1 , as shown in Fig. 2. These effects of initial glycerol and/or nitrogen concentration on the specific growth rate helped in the identification of appropriate substrate inhibition constants of the batch mathematical model for growth and PHB production, as described later.
Batch kinetics on C. necator using statistically optimized media Growth kinetics of the C. necator was then studied in a lab-scale bioreactor (7 L) under controlled pH and temperature of 6.8, and 30 °C, respectively. Fig. 3 demonstrates the time course of batch cultivation of C. necator for PHB production. The culture exhibited an initial lag phase of around 9 h, after which it featured exponential growth. Batch fermentation featured overall accumulation of 8.88 g L -1 biomass, and 6.76 g L -1 PHB concentration in 42 h of cultivation period, thereby resulting in a maximum PHB productivity of 0.16 g L -1 .
Proposal of mathematical model and assessment of model parameters
Differential mass balance equations were used to describe the observed batch fermentation kinetics of microbial growth, substrate consumption, and PHB accumulation, as described below. Equation (2) describes the biomass formation rate (dX/dt), which features limitation by the two major nutrients, glycerol (S 1 ) and nitrogen (S 2 ), with Monod kinetics applied to the key substrate glycerol and sigmoidal kinetics to the minor nutrient nitrogen, respectively.
From the preliminary experiments on growth inhibition by increasing initial substrate (glycerol) concentration (Fig. 1), it was indicated that the specific growth rate started to decrease above a particular concentration of glycerol (25 g L -1 ). However, almost complete inhibition of culture growth was recorded only at a glycerol concentration (S m ) of 100 g L -1 . This indicated that the empirical correlation proposed by Luong 33 , which describes substrate inhibition kinetics of culture growth, was more appropriate to describe the observed experimental inhibition pattern by glycerol. Similarly, culture growth inhibition studies with respect to nitrogen exhibited a slow decrease in specific growth rate, followed by its decrease to zero with increasing initial nitrogen concentration, as shown in Fig. 2. The inhibition of culture growth by increasing concentrations of nitrogen was also described by the empirical correlation proposed by Luong 33 .
The differential mass balance equation for culture growth, incorporating these substrate inhibition terms, is given by Eq. (5).
Fig. 3 - Batch kinetic data of total biomass, nutrient consumption, and PHB production for C. necator in a 7-L bioreactor (glycerol concentration - circle, biomass concentration - triangle, PHB concentration - square, nitrogen concentration - cross)
The specific rate of glycerol consumption (q S1 ) was described by Eq. 7, and the specific nitrogen consumption rate (q S2 ) by Eq. 8, in which Y represents the yield of biomass with respect to nitrogen and m S2 the maintenance energy requirement of the cell on nitrogen. Product formation was observed during the growth phase as well as the non-growth phase; therefore, the specific rate of product formation (q P ) was adequately described by a growth-associated and a non-growth-associated component (Eq. 10), where k 1 and k 2 represent the growth-associated and non-growth-associated product formation constants, respectively. Hence, Eqs. (5, 7, 8, and 10) represent the batch mathematical model equations for growth, substrate consumption, and PHB accumulation by C. necator.
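Because the numbered equations themselves are not reproduced above, the sketch below assembles a functionally similar batch model from the kinetic forms named in the text: Monod limitation combined with Luong-type inhibition for glycerol, an analogous saturation/inhibition term for nitrogen, yield-based substrate consumption with maintenance, and Luedeking-Piret (growth- plus non-growth-associated) product formation, integrated with SciPy's Runge-Kutta solver. All functional forms and parameter values here are assumptions for illustration, not the paper's Eqs. (5, 7, 8, and 10) or the fitted values of Table 6.

```python
# Hedged sketch of a batch model with the structure described in the text;
# kinetic forms and parameter values are assumptions, not the paper's equations.
import numpy as np
from scipy.integrate import solve_ivp

p = dict(mu_max=0.53, Ks1=2.0, Sm1=100.0, n1=1.0,    # glycerol: Monod x Luong terms
         Ks2=0.2, Sm2=13.0, n2=1.0,                  # nitrogen: saturation x Luong terms
         Yxs1=0.35, ms1=0.02, Yxs2=5.0, ms2=0.002,   # yields and maintenance coefficients
         k1=0.6, k2=0.01)                            # Luedeking-Piret constants

def mu(S1, S2):
    """Specific growth rate limited/inhibited by glycerol (S1) and nitrogen (S2)."""
    g = S1 / (p["Ks1"] + S1) * max(1.0 - S1 / p["Sm1"], 0.0) ** p["n1"]
    n = S2 / (p["Ks2"] + S2) * max(1.0 - S2 / p["Sm2"], 0.0) ** p["n2"]
    return p["mu_max"] * g * n

def batch(t, y):
    X, S1, S2, P = y
    m = mu(max(S1, 0.0), max(S2, 0.0))
    dX = m * X
    dS1 = -(m / p["Yxs1"] + p["ms1"]) * X if S1 > 0 else 0.0   # glycerol consumption
    dS2 = -(m / p["Yxs2"] + p["ms2"]) * X if S2 > 0 else 0.0   # nitrogen consumption
    dP = (p["k1"] * m + p["k2"]) * X                           # PHB formation
    return [dX, dS1, dS2, dP]

# initial state: 0.2 g/L biomass, 40 g/L glycerol, 2 g/L nitrogen source, no PHB
sol = solve_ivp(batch, (0, 48), [0.2, 40.0, 2.0, 0.0], max_step=0.1)
print("t = 48 h (X, S1, S2, P):", np.round(sol.y[:, -1], 2))
```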
Estimation of model parameters
The optimized values of the model parameters (Table 6) were determined by minimizing the difference between the experimental data points and the corresponding model simulations using a non-linear regression technique 27,34 implemented as a computer program 28 . For the estimation of the model parameters, the system of differential equations, Eqs. (5, 7, 8, and 10), was solved using a numerical integration program based on the 4th-order Runge-Kutta method. Thereafter, the search for the minimum of the multivariable objective function (SSWR, Eq. 11), which has been used extensively by several researchers, was performed by the original algorithm of Rosenbrock 27 , where:
- SSWR is the sum of the squares of the weighted residues;
- i denotes the data point and runs from 1 to n, while j denotes the process variable and runs from 1 to m;
- W j is the weight of each variable (normally taken as the maximum value of each process variable), used to normalize the error between experimental data points and model simulations;
- Δ ij is the difference between the model-simulated process variable at a particular data point and the corresponding experimental data point (y model - y expt ).
Fig. 4 shows the comparison of the model simulations and experimental data points, wherein a good agreement between the two is clearly reflected. The developed model was thus able to describe successfully the experimental batch kinetics of C. necator.
In the present investigation, the nutrient feedings were designed to ensure a non-growth phase of cultivation, characterized by excess availability of the major substrate (glycerol) and limiting concentrations of nitrogen, to facilitate enhanced PHB accumulation. Several factors, including limitation of the fresh key (major) nutrient feed, its optimal concentration, on/off time, and its rate of addition, play an important role in the successful design of fed-batch cultivations. Thus, the developed batch mathematical model can be utilized as a tool to simulate nutrient feeding strategies for the key limiting nutrients (carbon and nitrogen) to yield the highest product concentration with minimal unconverted substrate at the end of fermentation. The batch model equations were extrapolated to describe the fed-batch model by incorporating the dilution terms (Eqs. 12-17).
Fig. 4 - Comparison of model simulations (smooth lines) and experimental values (data points) of batch fermentation kinetics of C. necator. Data points (• glycerol, × nitrogen, ■ PHB, ▲ biomass) represent the average values of triplicate samples.
In these equations, D represents the dilution rate (D = F/V), F the total feed flow rate, F 1 the flow rate for glycerol, F 2 the flow rate for nitrogen, V the working volume of the bioreactor, and S 01 and S 02 the inlet concentrations of the substrates glycerol and nitrogen, respectively, in the feed reservoir. The different fed-batch cultivation strategies used in the present investigation are described further.
Fed-batch cultivation with constant feed rate
With an aim to improve the PHB concentration and productivity over the batch cultivation, the developed model was used to simulate the fed-batch strategy with constant feed rate offline and was later experimentally implemented. Initially, cultivation of C. necator was conducted in batch mode in a 7-L bioreactor (4-L working volume) with statistically optimized medium recipe. When the culture was in an active growing stage, constant feeding of glycerol (175 g L -1 ) and nitrogen (2.5 g L -1 ) at 100 mL h -1 was initiated, as shown in Fig. 5, keeping other medium nutrients at their optimized value. The feeding was continued for 20 h in order to sustain the exponential growth of the culture. At 35 h, the reactor was again operated in batch mode (secondary batch) for the complete consumption of residual substrates. The strategy proved extremely advantageous because the limiting (disappearing) nutrient availability at the time when the biomass concentration was extremely high (at hour 15) was overcome by addition of a constant feed of glycerol and nitrogen, which eventually featured increased rates of biomass and product accumulation, and better glycerol consumption. This model-designed nutrient feeding strategy ensured a reasonably high glycerol and limiting nitrogen availability, which ensured PHB accumulation in the latter phase of the cultivation. A maximum biomass of 18.79 g L -1 and PHB accumulation of 11.37 g L -1 (60 % of CDM) was obtained experimentally in 48 h in the fed-batch cultivation, which was in close proximity to the model-predicted values. Fig. 5 describes the different experimental data points and the corresponding model simulations (smooth lines) for the fed-batch cultivation under constant feed rate, as identified above. This fed-batch cultivation strategy demonstrated a significant improvement in PHB productivity (0.23 g L -1 h -1 ) as opposed to 0.16 g L -1 h -1 observed during batch cultivation.
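A compact sketch of how such a constant-feed fed-batch can be simulated is given below: the batch rate expressions are retained, and dilution terms D = F/V together with dV/dt = F are switched on between 15 h and 35 h. The feed concentrations and feed rate follow the text; the kinetic constants are the same illustrative placeholders used earlier, so the printed numbers are not the reported results.

```python
# Hedged sketch: constant-feed fed-batch with dilution terms added to a simple
# batch model; kinetic constants are placeholders, feed settings follow the text.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks1, Sm1, Ks2, Sm2 = 0.53, 2.0, 100.0, 0.2, 13.0
Yxs1, Yxs2, k1, k2 = 0.35, 5.0, 0.6, 0.01
S01, S02 = 175.0, 2.5                     # feed concentrations: glycerol and nitrogen source, g/L
F_feed, t_on, t_off = 0.1, 15.0, 35.0     # 100 mL/h feed between 15 h and 35 h

def mu(S1, S2):
    g = S1 / (Ks1 + S1) * max(1.0 - S1 / Sm1, 0.0)
    n = S2 / (Ks2 + S2) * max(1.0 - S2 / Sm2, 0.0)
    return mu_max * g * n

def fedbatch(t, y):
    X, S1, S2, P, V = y
    F = F_feed if t_on <= t <= t_off else 0.0     # constant feed only while switched on
    D = F / V                                     # dilution rate
    m = mu(max(S1, 0.0), max(S2, 0.0))
    dX = m * X - D * X
    dS1 = D * (S01 - S1) - (m / Yxs1) * X
    dS2 = D * (S02 - S2) - (m / Yxs2) * X
    dP = (k1 * m + k2) * X - D * P
    dV = F
    return [dX, dS1, dS2, dP, dV]

sol = solve_ivp(fedbatch, (0, 48), [0.2, 40.0, 2.0, 0.0, 4.0], max_step=0.05)
X, S1, S2, P, V = sol.y[:, -1]
print(f"t = 48 h: X = {X:.1f} g/L, PHB = {P:.1f} g/L, glycerol = {S1:.1f} g/L, V = {V:.2f} L")
```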
Fed-batch cultivation at pseudo steady state
Another fed-batch cultivation strategy of pseudo steady state was simulated and experimentally implemented. A large number of off-line simulations was carried out to identify the glycerol concentration such that its constant concentration (pseudo steady state) was maintained inside the reactor, so that neither limitation nor inhibition of C. necator would occur. The model-based cultivation was initiated as a batch, and when the culture was actively growing and the residual glycerol concentration was reduced to 19.13 g L -1 , variable feeding of glycerol at 200 g L -1 (along with proportionately increased concentrations of other medium components) was started at 20 h and fed until 40 h of cultivation. At 40 h, the mathematical model was again utilized to simulate secondary batch fermentation so that the higher concentration of accumulated residual glycerol is consumed completely before the termination of the experiment. For this fed-batch cultivation, the model predicted an overall biomass of 29.94 g L -1 and PHB accumulation of 13.84 g L -1 . Fig. 6 shows the comparison of experimental (data points) along with the corresponding model simulations (smooth lines) for the fed-batch cultivation featuring the pseudo steady state with respect to glycerol for 20-40 hours. Reasonably high biomass concentration of 24.44 g L -1 and PHB accumulation of 13.12 g L -1 (53 % of CDM) was obtained experimentally in 48 h of cultivation of C. necator. This cultivation strategy exhibited major improvement in PHB productivity (0.27 g L -1 h -1 ) as opposed to 0.16 g L -1 h -1 obtained during batch cultivation.
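The feed-rate logic behind such a pseudo steady state can be written down directly: if the residual glycerol is to stay at a set-point S1*, the feed must deliver glycerol exactly as fast as the culture consumes it, i.e. F(t) = q_S1 X V / (S01 - S1*), so the feed rate grows with biomass. The numbers in the short sketch below are illustrative assumptions, not values taken from the experiment.

```python
# Hedged sketch: feed rate needed to hold glycerol at a set-point (pseudo steady
# state); all numbers below are assumed for illustration.
q_s1 = 0.25               # specific glycerol uptake rate, g glycerol/(g biomass h), assumed
X, V = 10.0, 4.5          # biomass concentration (g/L) and broth volume (L) at this instant
S0, S_set = 200.0, 19.0   # glycerol in the feed and target residual glycerol, g/L

F = q_s1 * X * V / (S0 - S_set)   # L/h of feed that exactly replaces consumed glycerol
print(f"required feed rate at this instant: {F * 1000:.0f} mL/h")
```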
Fed-batch cultivation with decreasing feed rate
In the fed-batch cultivation strategies, it was observed that there was a need for secondary batch cultivations to consume the unconverted glycerol when the nutrient feed was completed at the full reactor volume condition. Therefore, it was considered necessary to design the nutrient feeding strategy in such a way that it eliminates the need for secondary batch cultivations, where termination of feeding of substrate (glycerol) coincides with the end of fermentation. Out of several off-line computer simulations of the model, one such model-simulated fed-batch strategy was experimentally implemented where the batch cultivation lasted for 16 h, and thereafter feeding of substrate (200 g L -1 glycerol and 3.5 g L -1 of nitrogen) was continued until 24 h at a feed rate of 75 mL h -1 . The feeding rate was then reduced to 55 mL h -1 after 24 h of cultivation, and was then continued for another 8 h; thereafter, the flow rate was further reduced to 35 mL h -1 until the end of fermentation (48 hours). Design of such a strategy predicted an overall biomass of 20.57 g L -1 . Fig. 7 describes a comparison of model simulation and experimental observation of fed-batch cultivation, when a decreasing feed rate strategy was implemented in the bioreactor. This fed-batch cultivation strategy also resulted in an increase in overall productivity of PHB to 0.23 g L -1 h -1 as opposed to 0.16 g L -1 h -1 obtained during batch cultivation. Figs. 5, 6, and 7 clearly suggest that the experimental process variables matched the model simulations for almost the entire cultivation period. This, along with Table 8, demonstrated the validity of the mathematical model particularly during highly dynamic fed-batch cultivation conditions, as well as established its use for enhancing the productivity of PHB accumulation.
Discussion
To date, there are very few reports on PHB production by C. necator 15,17,18,20,21,36,37 and none on the use of mathematical models for the design of nutrient feeding strategies for growth-associated PHB production using C. necator DSMZ 545 on glycerol.
It has been invariably observed that during batch cultivation, limitation of essential nutrients (carbon) occurs during a major part of the cultivation of microbial cells, which significantly hinders growth of C. necator and accumulation of PHB. To keep growth and product accumulation rates high, a fresh supply of nutrient(s) is necessary at appropriate times. Therefore, a set of model equations was simulated by a computer program, and the different fed-batch cultivation strategies were designed by varying the carbon and nitrogen concentrations in the feed at different time intervals. Table 7 shows some of the best possible strategies demonstrating high biomass and PHB accumulation in the present study, and Table 8 shows predicted and experimental data of fed-batch cultivation of C. necator. The main objective of all fed-batch simulations was to ensure high biomass with maximum PHB accumulation at the end of the fermentation. Among all the strategies tested, the best results were obtained with the pseudo steady state of glycerol, wherein a maximum PHB accumulation of 13.12 g L -1 in the 7-L bioreactor was observed. The developed model predicted high PHB accumulation, which was experimentally verified with minimum experimental effort by cultivation of C. necator on glycerol, thus demonstrating the high predictive power of the mathematical model for enhanced PHB production. Different researchers have also implemented different nutrient feeding strategies during fed-batch cultivation for process improvement with respect to PHB accumulation and/or productivity. Fed-batch cultivation of A. latus ATCC 29713 was used for PHB accumulation, wherein the effects of constant-rate feeding, an exponentially increasing feeding rate, and pH-stat fed-batch culture on maximum PHB accumulation were examined. It was possible to accumulate 18.2 g L -1 PHB under pH-stat fed-batch cultivation. In addition, the distinct capability of the mathematical model to successfully predict highly dynamic fed-batch cultivation strategies was demonstrated by their experimental implementation 14 . A significantly high PHB concentration of 22.65 g L -1 and an overall PHB content of 76 % were achieved during constant feed rate fed-batch cultivation using the model-based approach. This was the highest PHB content reported so far using Azohydromonas australica. Hence, the present work further demonstrated that mathematical models are excellent tools for understanding culture behaviour without extensive trial experiments, and that they help greatly in the design of bioreactor cultivation strategies for process optimization with minimum experiments. The scope of the present mathematical model can be further enhanced by making it pH- and temperature-sensitive, and it can then be used for the design of more complex fed-batch/continuous cultivation strategies for overproduction of PHB by C. necator.
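For readers unfamiliar with how such off-line simulations are commonly set up, the sketch below shows a generic fed-batch mass-balance system with a Luedeking–Piret product term (qP = K1·µ + K2), using the symbols listed in the nomenclature. It assumes simple Monod-type kinetics with placeholder parameter values and a constant assumed feed rate, so it is only a schematic illustration, not the authors' actual model or parameter set.

```python
# Schematic fed-batch balances with assumed Monod-type kinetics (illustrative only).
from scipy.integrate import solve_ivp

mu_max, K_S1 = 0.3, 1.0        # placeholder kinetic parameters
Y_XS1, m_S1 = 0.45, 0.02       # placeholder yield and maintenance coefficients on glycerol
K1, K2 = 0.35, 0.01            # placeholder Luedeking-Piret constants
S1_feed, F = 200.0, 0.05       # feed glycerol 200 g L-1 (from the text); feed rate in L h-1 (assumed)

def fedbatch(t, y):
    X, S1, P, V = y                           # biomass, glycerol, PHB, broth volume
    S1 = max(S1, 0.0)                         # guard against small negative overshoot
    mu = mu_max * S1 / (K_S1 + S1)            # specific growth rate
    qS1 = mu / Y_XS1 + m_S1                   # specific glycerol consumption rate
    qP = K1 * mu + K2                         # growth-associated product formation
    D = F / V                                 # dilution caused by the feed
    return [mu * X - D * X,                   # dX/dt
            D * (S1_feed - S1) - qS1 * X,     # dS1/dt
            qP * X - D * P,                   # dP/dt
            F]                                # dV/dt

sol = solve_ivp(fedbatch, (0.0, 48.0), [0.5, 20.0, 0.0, 4.0], max_step=0.1)
print(sol.y[0, -1], sol.y[2, -1])             # final biomass and PHB concentrations (illustrative)
```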
Conclusions
In the present study, glycerol, an inexpensive carbon source, was used for the production of PHB. The methodology featured statistical media optimization for the cost-effective production of PHB by C. necator. Thereafter, mathematical model for PHB production was developed using batch kinetic data and culture growth inhibition data of C. necator. The developed batch kinetic model was extrapolated to fed-batch cultivations, and used for the design of different nutrient feeding strategies for high PHB accumulation. Different model-based fed-batch cultivation strategies were then experimentally implemented. Among all cultivation strategies, maximum PHB accumulation and productivity of 13.12 g L -1 and 0.27 g L -1 h -1 , respectively, were obtained when the fed-batch was carried out under maintenance of pseudo steady state with respect to substrate (glycerol) for a major period of cultivation. This strategy featured maintenance of high substrate availability and limiting concentrations of nitrogen, which led to high intracellular PHB accumulation. The manuscript summarizes a comprehensive engineering optimization strategy for improvement of productivity of PHB accumulation, which involves maintenance of specific nutrient availabilities (high or low) during the cultivation. The methodology adopted in this investigation is system-independent, and can be applied to other cultivation systems for process optimization in minimum trial and error experiments.
Acknowledgement
The authors would like to acknowledge the financial support from the Department of Biotechnology (DBT), India for establishing a "Centre of Excellence" for biopolymer production from renewable resources at IIT Delhi.
Conflict of interest
The authors declare that they have no conflict of interest.
Batch cultivation
Fed-batch at constant feed rate
K S1 - saturation constant for glycerol consumption, g L -1
K S2 - saturation constant for nitrogen consumption, g L -1
a 1 - exponent indicating type of relationship between S 1 (glycerol) and µ
a 2 - exponent indicating type of relationship between S 2 (nitrogen) and µ
S m1 - critical glycerol concentration at which complete inhibition occurs
S m2 - critical nitrogen concentration at which complete inhibition occurs
Y X/S1 - yield with respect to glycerol, g g -1
Y X/S2 - yield with respect to nitrogen, g g -1
m S1 - maintenance energy requirement of the cell on glycerol
m S2 - maintenance energy requirement of the cell on nitrogen
q S1 - specific rate of glycerol consumption, h -1
q S2 - specific rate of nitrogen consumption, h -1
q P - specific rate of product formation, h -1
K 1 - growth-associated product formation constant, g g -1
K 2 - non-growth-associated product formation constant, h -1
S 1 - glycerol concentration, g L -1
S 2 - nitrogen concentration, g L -1
X - biomass concentration, g L -1
Greek symbols
µ max - maximum specific growth rate, h -1
µ - specific growth rate, h -1 | 2021-05-25T23:30:43.097Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "7440b519d2ef1a44259879fcb8d17b1ef9fbcb13",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.15255/cabeq.2020.1864",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7440b519d2ef1a44259879fcb8d17b1ef9fbcb13",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
252616231 | pes2o/s2orc | v3-fos-license | Use of OER and Interactive Online Learning in an Introductory Financial Accounting MOOC
The paper describes the development process, present status, and research plans for Introductory Financial Accounting, a free Massive Open Online Course offered by Athabasca University. The course was launched in September 2021 to reduce educational costs for students, provide a pathway to university credit, and encourage more students to consider the institution's Bachelor of Commerce program. The course allows students to explore the field of accounting without pressure or financial risk, and at their own pace. The instructional material used is an open educational resource, Introduction to Financial Accounting (Based on International Financial Reporting Standards). Learning material for each unit is integrated with sophisticated interactive online exercises and tailored feedback. There are also auto-graded quizzes at the end of each of five modules that can be attempted as often as desired. Learners can review material and receive feedback in a non-threatening learning environment. Discussion forums are available for student-student interactions. These are also monitored on a volunteer basis by an academic expert. A frequently-asked question database is being expanded as the project proceeds. A total of five badges are awarded to learners as they progress; this is automatically tracked and communicated via email. When learners complete 75% of the material, they can purchase a certificate of completion and attempt an online invigilated exam if desired. If the exam is successfully completed, credit can be awarded for ACCT 253: Introductory Financial Accounting. This is a required 3-credit course in Athabasca University's Bachelor of Commerce program. The paper also describes a proposed research program. This will correlate social presence indicators in three different types of discussion forums with relative success and persistence measures.
Use of OER and Interactive Online Learning in an Introductory Financial Accounting MOOC
Introductory Financial Accounting is a free MOOC (Massive Open Online Course) offered by PowerED, a for-profit learning unit at Athabasca University. The course opened in September 2021. It may be classified as an xMOOC, or eXtended Massive Open Online Course. For instance, the learning process is highly-structured and the path through the course is specified. The MOOC is divided into five modules. These cover the accounting cycle, how to analyze and record financial transactions, and how to report those financial transactions in financial statements. Students also learn how to interpret financial statements using ratio analysis and explore the basic technical skills needed for financial accounting. When the MOOC is successfully completed, learners should have the skills, knowledge, and critical thinking abilities to be able to prepare and analyze a set of basic financial statements.
Various pieces of the MOOC have been developed over the last 20 years, starting with the development of an open educational resource, Introduction to Financial Accounting. In 2012, this OER was adopted as the text for Athabasca University's ACCT 253: Introduction to Financial Accounting. This is a required 3-credit course in the Faculty of Business' (FB) Bachelor of Commerce program. Because texts are provided as part of students' tuition and ACCT 253 has over 1,000 enrolments per year, the university reduced its costs of purchasing commercial texts by over $200,000 per year when it adopted this free learning material. Some of these savings were used to contract with a third party, Lyryx Learning, to produce complex, algorithmically generated interactive online activities. These activities and the OER material are housed on Lyryx servers. Students are linked from AU's Moodle learning management system at appropriate places in the course. The Lyryx activities are used for both formative and summative purposes. They provide detailed feedback to students as they progress through the course. By providing a small amount of financial compensation to Lyryx, an agreement was reached to make these ACCT 253 online activities available to MOOC learners as well.
The idea to offer a MOOC in introductory financial accounting had been discussed for several years between two Faculty of Business accounting professors, Dr. Tilly Jensen and the presenter. A serious effort to launch the MOOC commenced in early 2019. Initial meetings were held among the two accounting faculty members, an FB instructional designer, and an interested Moodle developer from the university's Information Technology department. An informal survey of introductory financial accounting MOOCs offered elsewhere was undertaken. Overall, most of these courses were at the graduate level, covered only part of what should constitute a full, three-credit undergraduate accounting course, had rudimentary learning materials, or had fixed start and end dates. A market seemed to exist for a MOOC that would feature:
• online learning for anyone with a computer, web browser, and Internet connectivity;
• the ability for learners to start the course when desired and proceed at their own pace through the material;
• equivalent learning resources and content coverage as an undergraduate introductory financial accounting course;
• discussion forums;
• a frequently-asked question database; and
• the possibility for successful completers to earn credit towards AU's Bachelor of Commerce degree.
The only significant differences in learning design are that MOOC learners would not have one-on-one access to accounting tutors or AU support staff as they do in ACCT 253. On the other hand, the MOOC is free.
When development commenced, Moodle was used as the University's learning management system. However, implementation of a new interactive learning environment (ILE) was well underway. As a result, university resources were unavailable to develop the MOOC in Moodle. In the fall of 2020, several external MOOC platforms were examined, including Coursera and EdX. However, these organizations' cost structures made the undertaking relatively expensive. Consultations were also undertaken with the University's Centre for Distance Education, which offers several MOOCs in how to learn online and at a distance. These courses use Canvas as the learning management system. However, the terms of use for this LMS limit use to teaching educational technology topics. This precluded course content like introductory financial accounting.
Finally, PowerED was approached in December 2020. This is a separate, for-profit unit within Athabasca University that functions somewhat like a traditional continuing education department. PowerED generally offers non-credit mini-courses, mostly to paying students, mostly about business topics, and mostly with fixed start and end dates. On the other hand, staff did have experience offering non-synchronous courses that could be completed at the desired pace of individual learners, though not as MOOCs. The unit has its own web developers and marketing staff. It contracts with developers as needed to produce branded courses. It has developed an ecommerce site to enrol students and process financial payments, among other capacities. The unit's mandate and operating model fit well with the Introductory Financial Accounting MOOC objectives. PowerED's courses have been offered in D2L's Brightspace interactive learning environment for several years. Propitiously, Brightspace had been recently chosen as the new AU-wide integrated learning environment to replace Moodle.
Over the next several years, the PowerED platform will be gradually merged with that of for-credit academic units.
But start-up funds for the accounting MOOC were hard to obtain. Eventually, the then-Dean of the Faculty of Business agreed to provide $20,000 of internal funds to cover the estimated start-up costs. It was felt that if successful completers could be awarded credit for a required course in the BComm, more program students could be attracted. With these funds, PowerED agreed to develop and host the MOOC. It contracted with D2L to develop the website. Over the next ten months, regular development meetings took place among the two accounting faculty members and PowerED, D2L, and Lyryx staff. Discussions ranged from small issues like font sizes to larger issues such as how to tie in the Lyryx Learning interactive activities to the Brightspace environment and exchange student progress data between the two platforms. Eventually, an LTI link was designed. This enables "badges" to be automatically issued by Brightspace intelligent agents. These badges are relatively informal: encouraging emails are sent to learners when 15%, 30%, 45%, and 60% of the Lyryx interactive activities have been completed.
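The badge workflow can be pictured as a simple threshold check on the completion percentage reported by Lyryx; the sketch below is a hypothetical illustration of that logic (it is not the actual Brightspace intelligent-agent configuration, and the function name is invented), using the 15%, 30%, 45%, and 60% milestones named above.

```python
BADGE_THRESHOLDS = [15, 30, 45, 60]   # % of Lyryx activities completed (from the text)

def badges_to_issue(completion_pct, already_issued):
    """Return the milestone badges newly earned at the current completion level.

    completion_pct : percentage of interactive activities completed (0-100)
    already_issued : set of thresholds for which an email has already been sent
    """
    earned = {t for t in BADGE_THRESHOLDS if completion_pct >= t}
    return sorted(earned - set(already_issued))

# Example: a learner moves from 20 % to 50 % completion.
print(badges_to_issue(50, already_issued={15}))   # -> [30, 45]
```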
Concurrently with the development of the website over the next several months, regular discussions ensued among PowerED, Office of the Registrar, and Faculty of Business staff to develop a process whereby successful MOOC completers could be awarded credit for ACCT 253. There were several regulatory hurdles to overcome. The university has a robust transfer credit program but because the MOOC was internal to AU, transfer credit regulations did not apply. A challenge-for-credit option is also available for students with prior accounting experience to gain credit for ACCT 253. However, at $374 the fees were considered too high to attract many potential program students. Eventually it was agreed that a learner who successfully finished 75% of the Lyryx learning activities would be deemed to have fulfilled the course requirements. They would automatically receive an email inviting them to purchase PowerED-issued "certificates of completion" to formally confirm the accomplishment to the participant's employers, for instance. Upon purchase, learners are also informed of the challenge-for-credit option. For no extra charge, they are eligible to write a timed, invigilated, online exam equivalent to the ACCT 253 challenge exam. To be successful, the student must achieve a mark of at least 50%. They can then apply to be awarded ACCT 253 credit on their AU transcript.
The cost of the certificate is $160. This is less than ½ the cost of the usual challenge-for-credit fee because many of the applicable MOOC processes have been automated. For instance, 80% of the challenge exam is automatically marked. Funding has now been obtained to cover marking costs of the other 20%. The process to request the challenge exam, record the final exam mark, and notify the Office of the Registrar of successful completion has also been streamlined using intelligent agents in D2L Brightspace, among other means. As of March 31, 2022, there are 1,081 students active in the MOOC. No students have completed at least 75% of the material, purchased the certificate of completion, or written the challenge exam. Badges have been awarded as follows:
• 15% of MOOC completed: 7 learners
• 30% of MOOC completed: 3 learners
• 45% of MOOC completed: 2 learners
• 60% of MOOC completed: 1 learner
Proposed Research
Among other criticisms, xMOOCs generally lack interaction among students and between learners and instructors. Interaction tools are provided in the introductory financial accounting MOOC by means of discussion forums. Interaction patterns within various types of discussion forums are the focus of the proposed research program. The study will randomly assign participating students to one of three categories of online interactions. The relative success of learners in each group will be measured. The three categories of interaction are:
• no discussion group (control group)
• discussion group with only student-student interaction
• discussion group among students, moderated by an academic expert.
Non-participants will have access to all course materials and online assessments, but not to discussion forums. These are the same resources that are available to the study's control group. Standard demographics like age, gender, and prior educational experiences will be gathered to discover any anomalies across groups that might affect the findings. Within each interaction type, data will be collected to determine:
1. Completion rates, as a measure of persistence;
2. Knowledge acquisition, as measured by pre- and post-tests;
3. Whether 'social presence' effects as defined in the Community of Inquiry (CoI) framework literature are affected by the different levels of learner-learner and learner-instructor interaction within the MOOC's discussion forums; and
4. If differing social presence effects are detected among the groups, whether these correlate with completion and persistence rates.
Review of the Literature
There are several aspects of the proposed study that should contribute to online, higher education research, and in particular the Community of Inquiry (CoI) literature.
1. As reported by Arbaugh et al. (2008), the CoI survey instrument is a robust and well-evaluated questionnaire used to assess three 'presences' within online learning environments - social, cognitive, and teaching. However, it has been used to measure student engagement in only a few MOOC studies (Damm, 2016). Garrison (2018) suggested that the survey could be useful for further MOOC research. Since the proposed research will be conducted across many volunteers, it should expand the scope of CoI research by verifying whether the instrument is a useful measure of social presence in MOOC environments.
2. Other researchers have assessed the use of peer-facilitated discussions to enhance online higher education courses (Karunanayaka et al., 2016). However, differing modes of online, asynchronous communication (peer-facilitated learning; instructor-moderated discussion forums) appear to have been neither incorporated into MOOC research designs nor evaluated for effectiveness.
3. The CoI framework, and the usefulness of social presence, is predicated on the necessity of sustained, contiguous, two-way interaction in online learning environments (Garrison et al., 2000). This assumption has been challenged (Rourke & Kanuka, 2009; Annand, 2011). The proposed study will inform this debate by administering the CoI survey, assessing relative levels of social presence within different types of discussion forums, then correlating any significant differences with observed MOOC completion rates and other learning outcomes.
Research Procedures
A schematic of the proposed research is attached as an appendix. Volunteers will be invited to participate in the study when they enrol in the MOOC. Interested learners will add their information to the appropriate section of the sign-up sheet. At this point they will be randomly assigned to the control group (no discussion forum) or one of two discussion groups (unmoderated student-student forum; instructor-moderated forum).
A 15-item survey will be administered to gather perspectives on who participates in the MOOC and why. The survey should take about five minutes to complete. A ten-item pre-test also will be administered to all participants beforehand to determine their basic accounting knowledge. If a student writes the challenge exam at the end of the course, this pre-test data will be compared with challenge exam results. The results will be used to approximate increases in introductory financial accounting knowledge. Completion rates across each of the three group types also will be compared. A short, four-item exit survey will be conducted with participants who do not complete the course within one year.
A 34-item Community of Inquiry questionnaire will be administered to all participants. The results will be analyzed to determine differences among measures of perceived social presence in each type of discussion forum in particular. Any statistically significant differences will be correlated with knowledge gains (pre-vs. post-test results) and proxies for persistence (completion rates; level of progress through the MOOC) to indicate whether differences in social presence affect these measures. A study size of about 120 learners in each of the three interaction types should identify significant differences with a 95% confidence interval.
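As a rough cross-check on the stated group size, the minimum detectable effect for a three-group one-way comparison can be approximated with a conventional power calculation; the sketch below assumes 80% statistical power and a significance level of 0.05, neither of which is stated in the text.

```python
# Approximate power check for three groups of n = 120 (assumptions: power = 0.80, alpha = 0.05).
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
effect_size = analysis.solve_power(effect_size=None, nobs=3 * 120,
                                   alpha=0.05, power=0.80, k_groups=3)
print(round(effect_size, 3))   # detectable effect size (Cohen's f), roughly 0.17 - a small-to-medium effect
```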
Implications for Future Research
Annand (2019) suggested that the value of the CoI framework as an adequate explanatory model for learning in online higher education needs to be more critically examined. The framework is predicated on a social constructivist paradigm that assumes sustained, contiguous communication is necessary for effective learning to occur. Yet it could be argued that most related research is conducted in environments that use learning techniques more often identified with an objectivist-rational paradigm. Therefore, the framework's underlying assumption of the need for sustained communication among learners needs further examination. In addition, the types of questions that could be pursued in CoI research may have been inadvertently limited by unchallenged assumptions that mistake predominant for preferred practice. For instance, many CoI studies have focussed on cohort-based, graduate-level courses with fixed start and end dates and relatively low student to instructor ratios. The design of the proposed study should produce empirical results that inform questions of efficacy and applicability of the CoI framework in a relatively unresearched higher education environment. | 2022-09-30T15:08:34.390Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "d1ca755f2586e0f8bcfeb515442f340a664e1856",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.56059/pcf10.2420",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2ea6b863a1792e5d59a97f1169123c40a3aa6f39",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
1650057 | pes2o/s2orc | v3-fos-license | Protective Effect of Boerhaavia diffusa L. against Mitochondrial Dysfunction in Angiotensin II Induced Hypertrophy in H9c2 Cardiomyoblast Cells
Mitochondrial dysfunction plays a critical role in the development of cardiac hypertrophy and heart failure; mitochondria are therefore emerging as one of the important druggable targets in the management of cardiac hypertrophy and other associated complications. In the present study, the effects of an ethanolic extract of Boerhaavia diffusa (BDE), a green leafy vegetable, against mitochondrial dysfunction in angiotensin II (Ang II) induced hypertrophy in H9c2 cardiomyoblasts were evaluated. H9c2 cells challenged with Ang II exhibited pathological hypertrophic responses and mitochondrial dysfunction, which were evident from increases in cell volume (49.09±1.13%), protein content (55.17±1.19%) and LDH leakage (58.74±1.87%), increased intracellular ROS production (26.25±0.91%), mitochondrial superoxide generation (65.06±2.27%), alteration in mitochondrial transmembrane potential (ΔΨm), opening of the mitochondrial permeability transition pore (mPTP) and mitochondrial swelling. In addition, the activities of mitochondrial respiratory chain complexes (I-IV), aconitase, NADPH oxidase and thioredoxin reductase, the oxygen consumption rate and calcium homeostasis were evaluated. Treatment with BDE significantly prevented the generation of intracellular ROS and mitochondrial superoxide radicals and protected the mitochondria by preventing dissipation of ΔΨm, opening of mPTP and mitochondrial swelling, and it enhanced the activities of the respiratory chain complexes and the oxygen consumption rate in H9c2 cells. Activities of aconitase and thioredoxin reductase, which were lowered (33.77±0.68% and 45.81±0.71%, respectively) due to hypertrophy, were increased in BDE treated cells (P≤0.05). Moreover, BDE also reduced the intracellular calcium overload in Ang II treated cells. Overall, the results revealed the protective effects of B. diffusa against mitochondrial dysfunction in hypertrophy in H9c2 cells, and the present findings may shed new light on the therapeutic potential of B. diffusa in addition to its nutraceutical potential.
Introduction
Heart diseases are one of the leading causes of death worldwide [1]. Hypertension accounts for a major part of the risk for the development of cardiac diseases through induction of left ventricular hypertrophy, and this ultimately leads to congestive heart failure and death [2]. Cardiac hypertrophy is the enlargement of the heart with an increase in the volume of cardiac cells, and a prolonged hypertrophic state has been associated with decompensation of heart function, development of heart failure and sudden death in humans [3]. Oxidative stress induced by various free radicals plays a vital role in the development of cardiac hypertrophy [4]. Mitochondria represent a substantial proportion (~30%) of the heart cell's mass, and mitochondrial dysfunction is usually associated with pathological hypertrophy [5]. Dysfunctional mitochondria act as one of the most significant sources of reactive oxygen species (ROS) production in the heart [6]. Angiotensin II is a major component of the renin-angiotensin system that plays a key role in the development of left ventricular hypertrophy [7]. It has been shown that angiotensin II stimulates mitochondrial dysfunction in cardiac cells and subsequently produces excessive amounts of ROS such as superoxide, hydrogen peroxide, and peroxynitrite. This overproduction of mitochondrial ROS has been implicated in heart failure [8]. Since mitochondrial dysfunction plays a critical role in the development of cardiac hypertrophy and heart failure, mitochondria are emerging as one of the important druggable targets in the management of cardiac hypertrophy and other associated complications.
Natural products are becoming popular throughout the world and are widely accepted as an adjunct to conventional therapy [9]. Various epidemiological, experimental and clinical studies have revealed that natural products in the form of functional foods or nutraceuticals play an important role in the prevention and management of cardiac diseases in a prophylactic way [10,11]. High consumption of plant-based foods is associated with a significantly lower risk of coronary artery disease, most likely due to the abundance and variety of bioactive compounds present in them [12,13]. Besides antioxidant activity, natural products have other biological properties, such as lipid-lowering, antihyperglycemic and antihypertensive effects, that help to reduce the risk of cardiovascular disorders.
Boerhaavia diffusa L., from the family Nyctaginaceae, is widely used as a green leafy vegetable and is an important indigenous medicinal plant with numerous biological properties. The plant is reported to possess cardiotonic and antihypertensive potential [14,15]. Pharmacological studies have demonstrated that B. diffusa possesses antioxidant [16], antidiabetic [17], immunomodulatory [18], anticonvulsant, hepatoprotective, antibacterial, antiproliferative and antiestrogenic activities [19,20]. Our previous studies showed the antihypertrophic potential of B. diffusa against angiotensin II induced hypertrophy in H9c2 cells by downregulating oxidative stress, along with its potent antioxidant capacity [21].
The present study aims to evaluate the mitochondrial dysfunction in angiotensin II induced hypertrophy in H9c2 cells and the protective effects of B. diffusa against mitochondrial damage in cardiac hypertrophy.
Materials and Methods
B. diffusa plants were collected from local areas of Thiruvananthapuram, India, and identified and authenticated by Dr. H. Biju, Taxonomist, Jawaharlal Nehru Tropical Botanic Garden Research Institute (JNTBGRI), Palode, Thiruvananthapuram, Kerala. No specific permissions were required for the collection of this plant. The plant material is plentifully available and widely distributed, and it is not an endangered or protected species; the GPS coordinates of the collection location are 8° 27' 36" North, 76° 59' 41" East. A voucher specimen was kept in our herbarium for future reference (No. 01/05/2010 APNP/CSIR-NIIST). Extraction of the whole plant material was done with ethanol as per our previous reports [21], and the yield of the B. diffusa extract (BDE) was found to be 12.64% (w/w). The same lot of the extract was used to conduct all the experiments.
Cell culture and treatment
The H9c2 embryonic rat heart-derived cell line was obtained from the American Type Culture Collection (ATCC) and was cultured in Dulbecco's modified Eagle's medium (HiMedia, India) containing 4.5 g/L glucose, 1.5 g/L sodium bicarbonate and 110 mg/L sodium pyruvate, supplemented with 10% fetal bovine serum (Gibco, New Zealand), penicillin (100 units/ml) and streptomycin (100 µg/ml), in a humidified incubator with 95% air and 5% CO2 at 37 °C. The culture medium was changed every 2 days. The cells were then passaged and seeded at a density of 3 × 10^5 cells/cm^2 of growth area in T75 (75 cm^2) tissue culture flasks, 1.2 × 10^6 cells per 100 mm dish, or 0.64 × 10^4 cells per 6.4 mm well of 96-well plates. These cells were cultured for 3 days and then underwent treatments.
H9c2 cells were treated with BDE for 6 hrs prior to angiotensin II (Ang II) treatment. Ang II (100 nM) (Sigma-Aldrich, St. Louis, MO, USA) was prepared in double distilled water and diluted with culture media to induce hypertrophy, and the cells were cultured for an additional 48 hrs [21]. The experimental groups consisted of (a) control cells, (b) BDE (75 µg/ml) alone treated cells, (c) Ang II (100 nM) alone treated cells, and (d) BDE (75 µg/ml) + Ang II (100 nM) treated cells. The doses of Ang II and BDE were selected based on our previous studies [21].
Induction of hypertrophy was confirmed by determining cell volume, protein content and LDH leakage [21].
Detection of intracellular reactive oxygen species (ROS) and mitochondrial superoxide production
Intracellular ROS levels were measured using flow cytometry with the fluorescent probe 2',7'-dichlorodihydrofluorescein diacetate (DCFH-DA) [22]. DCFH-DA is cleaved intracellularly by non-specific esterases and becomes highly fluorescent upon oxidation by ROS; the fluorescence was analyzed with a FACS Aria II (BD Biosciences, San Jose, USA).
Mitochondrial superoxide production in live cells was evaluated with the fluorescent dye mitoSOX. Briefly, after the respective treatments, cells were loaded with mitoSOX (5 µM) in the medium and incubated for 20 minutes. For bioimaging (BD Pathway TM Bioimager System, BD Biosciences), the dye was excited at 514 nm as described earlier [23].
Activities of aconitase, thioredoxin reductase, xanthine oxidase and NADPH oxidase
Activities of aconitase, thioredoxin reductase and xanthine oxidase were assayed in control and treated cells using the respective kits from Cayman Chemicals (USA) as per the manufacturer's instructions. The activity of NADPH oxidase was assayed as per the method of Qin et al. (2006) [24].
Determination of mitochondrial transmembrane potential (ΔΨm), integrity of mitochondrial permeability transition pore (mPTP) and mitochondrial swelling
Change in ΔΨm was detected using a mitochondria staining kit (Sigma-Aldrich, St. Louis, MO, USA) that uses JC-1, a cationic fluorescent dye. Briefly, the cells were seeded in 96-well black plates at a density of 5 × 10^3 cells per well. After 48 hours of treatment, the cells were incubated with the JC-1 stain for 20 minutes. For imaging of JC-1 monomers, the live cell bioimager (BD Pathway TM Bioimager System, BD Biosciences) was set at 490 nm excitation and 530 nm emission wavelengths, and for J-aggregates, the wavelengths were set at 525 nm excitation and 590 nm emission [25]. Valinomycin was used as a positive control.
To examine mPTP opening, the cells were loaded with calcein-AM (0.25 µM) in the presence of 8 mM cobalt chloride for 30 minutes to quench cytosolic and nuclear calcein fluorescence [25]. The calcein fluorescence is then compartmentalized within mitochondria until PTP opening permits the distribution of cobalt inside mitochondria, which results in the quenching of calcein fluorescence in the mitochondrial matrix. PTP opening thus leads to the decompartmentalization of calcein fluorescence. Images of cells were taken at 488 nm excitation and 525 nm emission (BD Pathway TM Bioimager System, BD Biosciences). For the determination of mitochondrial swelling, mitochondria were isolated using a mitochondrial isolation kit from Sigma-Aldrich (St. Louis, MO, USA). Mitochondrial swelling was determined as per a previously described method [26]. In brief, mitochondria (1 mg/ml) were incubated in a total volume of 1.8 ml of respiratory buffer (125 mM sucrose, 50 mM KCl, 5 mM HEPES, 2 mM KH2PO4, 1 mM MgCl2, at pH 7.2) in the presence of 6 mM succinate at 25 °C. Rotenone (2 µM) was added to the buffer just before the experiment. CaCl2 (100 µM) was used as the swelling agent. The change in absorbance was measured at 540 nm, and a decrease in absorbance indicates an increase in mitochondrial swelling.
Figure 1. Flow cytometric analysis of intracellular ROS generation in different groups. Analysis of intracellular ROS using the fluorescent probe 2',7'-dichlorofluorescein diacetate (DCFH-DA) reveals a significant increase in ROS generation by Ang II, but BDE treatment curtails the same on Ang II application. (i) Control cells (ii) BDE alone treated cells (75 µg/ml) (iii) Ang II (100 nM) treated cells (iv) BDE + Ang II treated cells. Population P2 represents the ROS. Results expressed as mean ± SD; n = 6 and the significance accepted at (P≤0.05). doi:10.1371/journal.pone.0096220.g001
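Since the text quantifies swelling as the decrease in absorbance at 540 nm, the extent of swelling can be expressed as the fractional drop in A540; the sketch below illustrates this simple calculation with hypothetical absorbance readings.

```python
# Illustrative quantification of mitochondrial swelling from A540 readings
# (the text states that a decrease in absorbance at 540 nm indicates swelling).
def swelling_extent(a540_initial, a540_final):
    """Relative swelling, expressed as the fractional drop in absorbance at 540 nm."""
    return (a540_initial - a540_final) / a540_initial

# Hypothetical readings before and after CaCl2 addition:
print(round(swelling_extent(0.82, 0.55), 2))   # -> 0.33, i.e. ~33 % decrease in A540
```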
Determination of the activity of mitochondrial respiratory complexes and oxygen consumption assay
After the respective treatments, mitochondria were isolated using a mitochondrial isolation kit (Sigma-Aldrich, St. Louis, MO, USA) and suspended in 50 mM phosphate buffer (pH 7.0). The suspension was then frozen and thawed 3-5 times to release the enzymes (except for complex IV, which was extracted with 0.5% Tween 80 in phosphate buffer, v/v). The effect of BDE on complex I-mediated electron transfer (NADH dehydrogenase) was studied using NADH as the substrate and menadione as the electron acceptor. The reaction mixture containing 200 µM menadione and 150 µM NADH was prepared in phosphate buffer (0.1 M, pH 8.0). To this, mitochondria (100 µg) were added and mixed immediately, and the change in absorbance at 340 nm was observed for 8 minutes (UV-2450 PC; Shimadzu, Kyoto, Japan) [27]. Rotenone (10 µM) was used to inhibit complex I.
Complex II-mediated activity (succinate dehydrogenase) was measured spectrophotometrically at 600 nm using dichlorophenolindophenol (DCPIP) as an artificial electron acceptor and succinate as the substrate. The extent of the decrease in absorbance (ΔOD) was considered as the measure of the electron transfer activity of complex II [27]. The reaction mixture was prepared in 0.1 M phosphate buffer (pH 7.4) containing 10 mM EDTA, 50 µM DCPIP, 20 mM succinate and mitochondria (50 µg). The change in absorbance was observed immediately for 8 minutes at 30 °C. Malonate (25 mM) was used to inhibit complex II.
Complex III (Ubiquinol-cytochrome c reductase) activity was determined as per the method described previously [28]. In brief mitochondrial protein (50 mg) was mixed with 100 mM/L EDTA, 2 mg BSA, 3 mmol/L sodium azide, 60 mM/L ferricytochrome C, decylubiquinol (1.3 mM) and phosphate buffer (50 mM, pH 8) in a final volume of 1 ml. The reaction was started by the addition of decylubiquinol and monitored for 2 min at 550 nm and again after the addition of 1 mmol/l of antimycin A. The activity was calculated from the linear part of absorption-time curve, which was not less than 30 seconds. Activity of complex III was expressed as mmoles of ferricytochrome C reduced/min/mg protein. Antimycin A (10 mM) was used as standard inhibitor of complex III.
Activity of complex IV (cytochrome C oxidase) was determined as per a previous method [28]. Briefly, 1 ml of ferrocytochrome C solution was mixed with approximately 10 µg of mitochondrial protein (extracted in 0.5% Tween 80 in 30 mmol/L phosphate buffer, pH 7.4) and phosphate buffer in a net volume of 1.3 ml. The reaction was started by the addition of the enzyme source and was monitored at 550 nm at intervals of 15 seconds for 4 min. The difference in absorbance was calculated from the linear part of the absorbance-time curve. KCN (5 mM) was used as the inhibitor of complex IV. Complex IV activity was expressed as micromoles of ferrocytochrome C oxidized/min/mg protein using the extinction coefficient 21 mM -1 cm -1 .
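The specific activity quoted above follows from the Beer–Lambert relationship, activity = (ΔA550 per minute)/(ε × l)/protein, with ε = 21 mM -1 cm -1 as cited in the text. The sketch below assumes a 1-cm light path and uses hypothetical absorbance and protein values; the 1.3-ml assay volume is taken from the text.

```python
# Complex IV specific activity from the linear part of the A550-time curve.
EPSILON_mM = 21.0       # extinction coefficient of ferrocytochrome c, mM-1 cm-1 (from the text)
PATH_CM = 1.0           # assumed cuvette path length
ASSAY_VOL_ML = 1.3      # net assay volume, from the text

def complex_iv_activity(delta_a550_per_min, protein_mg):
    """Specific activity, µmol ferrocytochrome c oxidized min-1 mg-1 protein."""
    rate_mM_per_min = delta_a550_per_min / (EPSILON_mM * PATH_CM)   # Beer-Lambert
    rate_umol_per_min = rate_mM_per_min * ASSAY_VOL_ML              # 1 mM in 1 mL = 1 µmol
    return rate_umol_per_min / protein_mg

# Hypothetical example: dA/min = 0.042 with 0.01 mg mitochondrial protein in the cuvette
print(round(complex_iv_activity(0.042, 0.01), 3))   # -> 0.26 µmol min-1 mg-1
```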
Oxygen consumption rate in control and treated cells was assayed using Cayman's cell-based oxygen consumption rate assay kit, with antimycin A as the standard inhibitor (Cayman Chemicals, Ann Arbor, USA).
Intracellular calcium ([Ca 2+ ]i) overload and the activity of calcium ATPase
[Ca 2+ ]i overload was detected by staining the cells after the respective treatments with Fura-2AM for 20 min at 37 °C, and the images were visualized using the BD Pathway TM Bioimager System (BD Biosciences) [29].
Activity of calcium ATPase was evaluated as per a previous method [30]. In this assay, 0.1 ml of cell lysate was added to a reaction mixture composed of 0.4 M Tris-HCl, 15 mM NaN3, 0.2 mM EDTA, 120 mM CaCl2 and 20 mM MgCl2 in all the tubes. Then 0.2 ml of ATP (3 mM, as substrate) was added to the test tubes. All the tubes were incubated for 30 min in a water bath at 37 °C, and the reaction was stopped by adding 2 ml of 10% trichloroacetic acid (TCA). All the tubes were then centrifuged at 2,500 rpm for 10 minutes to collect the supernatant. The protein-free supernatant was then analyzed for inorganic phosphate. For that, 3 ml of the supernatant was treated with 1 ml of ammonium molybdate and 0.4 ml of 1-amino-2-naphthol-4-sulphonic acid (ANSA), and the absorbance was read at 680 nm after 20 min.
Statistical analysis
Results were expressed as means and standard deviations (SD) of the control and treated cells from three independent experiments in duplicates (n = 6). Data were subjected to one-way ANOVA and the significance of differences between means was calculated by Duncan's multiple range test using SPSS for Windows, standard version 11.5.1 (SPSS, Inc.), and significance was accepted at P≤0.05.
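A minimal sketch of the corresponding group comparison is shown below for orientation; it runs a one-way ANOVA over the four experimental groups on hypothetical replicate values (the original analysis used SPSS with Duncan's multiple range test as the post-hoc procedure, which is not reproduced here).

```python
# One-way ANOVA across the four experimental groups (hypothetical replicate data, n = 6 each).
from scipy.stats import f_oneway

control      = [100, 98, 102, 101, 99, 100]
bde_alone    = [101, 99, 103, 100, 98, 102]
ang_ii       = [148, 152, 150, 147, 151, 149]
bde_plus_ang = [118, 121, 119, 120, 117, 122]

f_stat, p_value = f_oneway(control, bde_alone, ang_ii, bde_plus_ang)
print(f_stat, p_value < 0.05)   # significance accepted at P <= 0.05, as in the text
```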
Cell volume, protein content and LDH leakage in control and hypertrophied cells
Induction of hypertrophy by Ang II in H9c2 cells was confirmed by measuring cell volume, protein content and LDH leakage
Effect of BDE on intracellular ROS and mitochondrial superoxide production
Flow cytometric analysis of ROS showed that Ang II significantly (P≤0.05) elevated the intracellular ROS level (26.25±0.91%) in H9c2 cells compared with that of control cells (Fig. 1). Ang II induced ROS generation was significantly reduced (P≤0.05) by the treatment with BDE when compared to Ang II alone treated cells.
In addition, there was an increased generation of mitochondrial superoxide radicals (65.06±2.27%) in hypertrophied cells compared to control cells, while BDE treatment significantly reduced
Activities of aconitase, thioredoxin reductase, xanthine oxidase and NADPH oxidase
Activities of aconitase and thioredoxin reductase were significantly reduced in Ang II induced hypertrophied cells (33.77±0.68% and 45.81±0.71%, respectively), whereas activities of xanthine oxidase and NADPH oxidase were significantly elevated (84.17±0.87% and 137.78±0.93%, respectively) when compared with control cells. BDE treatment reversed these changes significantly (P≤0.05) and brought the activities back to near normal (Table 2). The opening of mPTP was examined using calcein-AM staining combined with CoCl2. Calcein-AM freely passes through cellular membranes, and the esterases in the cells cleave the acetomethoxy group to yield the fluorescent calcein. Co-loading of cells with CoCl2 quenches the fluorescence in the cell, except in mitochondria, since CoCl2 cannot cross the mitochondrial membrane. Therefore, during the opening of mPTP, mitochondrial calcein is also quenched by CoCl2, resulting in reduced fluorescence [31,32]. Integrity of mPTP was altered significantly in Ang II treated hypertrophied cells compared to control cells, which was evident from reduced calcein fluorescence (Fig. 4A & 4B). The presence of BDE protected the integrity of mPTP in Ang II treated H9c2 cells.
Effects of BDE on ΔΨm, mPTP and mitochondrial swelling
Investigation on mitochondrial swelling is one of the methods for the assessment of mitochondrial membrane integrity. H9c2 cells exposed to Ang II showed increased mitochondrial swelling
Oxygen consumption rate in control and treated cells
Oxygen consumption rate in living cells was analyzed using a phosphorescent probe, mitoXpress; a reduction in the phosphorescent signal over time indicates a lower oxygen consumption rate in the cells. Hypertrophied cells showed a reduced oxygen consumption rate compared with control cells, and treatment with BDE reversed these changes to near normal (P≤0.05), indicating that BDE protects against mitochondrial dysfunction in hypertrophy (Fig. 6).
[Ca 2+ ]i overload and the activity of calcium ATPase
Ang II induced [Ca 2+ ]i overload in H9c2 cells which was evident from increased Fura-2AM fluorescence (Fig. 7A & 7B) whereas activity of calcium ATPase (Fig. 8)
Discussion
Alteration in mitochondrial function plays a key role in the pathogenesis of cardiac hypertrophy, which may ultimately lead to heart failure [6]. The heart has a continuous demand for high energy, and an adequate supply of ATP is critical for the electrical and mechanical functions of the heart [33]. Over 90% of the energy consumption of the heart is from mitochondria, which play a key role in many cellular functions including energy production, calcium homeostasis and cell signalling [34]. Recent reports reveal that a crisis in energy production due to impaired mitochondrial function can result in cardiometabolic diseases [35]. Recently, the significance of the metabolic remodelling process in the hypertrophic growth response of the heart has been identified [36]. All this information underscores the profound importance of mitochondria in cardiac hypertrophy and other heart disorders. Mitochondria are the major site of ROS generation as a by-product of oxidative phosphorylation, and ROS play a critical role in the development of Ang II induced cardiac hypertrophy [7]. Significant changes in mitochondrial function as well as mitochondrial energetics have been described in various forms of cardiac hypertrophy [37]. Swollen cardiac mitochondria with disrupted cristae and substantial mitochondrial DNA depletion, along with reduction in the activities of respiratory chain complexes, were also observed in hypertrophic cardiomyopathy [38]. The possible mechanisms of mitochondrial dysfunction in pathological hypertrophy include ROS, cardiolipin loss or peroxidation, mitochondrial uncoupling, impaired mitochondrial biogenesis, and reduced transcriptional signalling of regulators of mitochondria [37].
The present study demonstrates for the first time that ethanolic extract of B. diffusa (BDE) attenuates hypertrophy-induced mitochondrial dysfunction in heart-derived H9c2 cells. Our previous studies have revealed that BDE protects H9c2 cardiomyoblasts against Ang II induced hypertrophy via its potent antioxidant activity [21]. Elevated levels of intracellular ROS (Fig. 1) along with surplus generation of mitochondrial superoxide radicals in hypertrophied cells (Fig. 2A & 2B) indicate the development of oxidative stress during hypertrophy. Increased superoxide radical generation affects the normal functioning of mitochondria and contributes to the progression of left ventricular hypertrophy [39]. Reduced generation of intracellular ROS and mitochondrial superoxide radicals in BDE treated cells shows the free radical scavenging potential of the extract (Fig. 1, 2A & 2B). NADPH oxidase and xanthine oxidase are two important enzymes that play a significant role in cardiovascular pathology, and these are the major enzymatic sources of ROS in the cardiovascular system [40,41]. An increase in the activities of these enzymes leads to increased production of superoxide radicals that ultimately leads to cardiac dysfunction [42]. Previous reports also suggest that NADPH-dependent superoxide radical generation is associated with the development of cardiac hypertrophy [24] and that the increased production of mitochondrial ROS by Ang II is mediated through NADPH oxidase [8]. It is interesting to note that treatment with BDE significantly prevented the alteration of these enzymes in the cells exposed to Ang II. Reduced activities of aconitase and thioredoxin reductase in hypertrophied cells again indicate mitochondrial dysfunction via excessive production of ROS. Reduced activity of mitochondrial aconitase is an indicator of mitochondrial superoxide production [43], and there is an inverse relation between superoxide production and the activity of aconitase in cardiac hypertrophy [44]. Reports suggest that thioredoxin reductase can attenuate cardiac hypertrophy not only by scavenging ROS but also by being involved in several steps of the redox regulation of the cell [45]. Here also, BDE treatment restored the activities of aconitase and thioredoxin reductase in hypertrophied cells.
ΔΨm is essential for normal mitochondrial function, and dissipation of ΔΨm indicates mitochondrial dysfunction [25]. Mitochondrial permeability transition is involved in the control of mitochondrial calcium homeostasis and apoptosis [46], and swelling of mitochondria is known to correlate with mitochondrial dysfunction and damage [37]. The present study reveals significant changes in ΔΨm (depolarization) (Fig. 3A & 3B), integrity of mPTP (Fig. 4A & 4B) and mitochondrial swelling (Fig. 5) in hypertrophied cells. Depolarization of ΔΨm by Ang II was dependent on increased NADPH oxidase activity and ROS [8]. Alteration in ΔΨm may lead to the uncoupling of the respiratory chain, and this accompanies mPTP opening [46]; the activation of mPTP opening disrupts the permeability barrier of the inner mitochondrial membrane, causing uncoupling of oxidative phosphorylation, osmotic swelling, and rupture of the outer membrane, and ultimately cell death [34,47]. One of the main events that are thought to trigger mitochondrial dysfunction is the mPTP, with subsequent opening of the mitochondrial pore and mitochondrial swelling [48]. This is a clear-cut indication of the role of mitochondria in angiotensin II mediated hypertrophy in the heart. BDE treatment was found to prevent the changes in ΔΨm, mPTP and mitochondrial swelling significantly in Ang II induced hypertrophied cells, suggesting that BDE can attenuate mitochondrial alterations in hypertrophied cells.
Excessive production of ROS impairs the activities of respiratory chain complexes, which are very important in the biology of the heart [49]. Generally, the impairment of complex I and III activities may increase the electron leakage from the electron transport chain, generating more superoxide radicals and perpetuating a cycle of oxygen radical induced damage to mitochondrial membrane constituents [49]. Activities of mitochondrial respiratory complexes were significantly reduced in hypertrophied cells, suggesting the role of oxidative stress, and reduced activities of respiratory complexes are reported to increase mitochondrial ROS production [8]. A reduction in complex I enzyme activity leads to accumulation of electrons in the initial part of the transport chain, which facilitates direct transfer of electrons to molecular oxygen and results in the generation of superoxide radicals [50]. In addition, superoxide radicals can react with nitric oxide radicals to form the highly toxic peroxynitrite radical, which in turn can cause serious mitochondrial dysfunction by damaging respiratory complexes [8]. BDE treatment protected the activities of these electron transport chain complexes from the deleterious effect of Ang II on myoblasts.
Oxygen consumption rate is an important indicator of normal cellular function, and unhealthy cells with dysfunctional mitochondria show a lower oxygen consumption rate when compared to healthy cells. Since most of the oxygen consumption occurs via mitochondria, oxygen consumption rate has been used as a parameter to study mitochondrial function [51]. In our study, the reduced oxygen consumption rate in hypertrophied cells further supports the mitochondrial dysfunction, and BDE treatment attenuated the reduction in oxygen consumption in H9c2 cells (Fig. 6). Ang II reduces oxygen consumption [52], and there are reports that pathological hypertrophy is associated with mitochondrial dysfunction and reduced oxygen consumption [37].
Mitochondria play an important role in cellular Ca 2+ homeostasis [53]. [Ca 2+ ]i overload, as a consequence of dysregulation of Ca 2+ homeostasis, leads to cardiac dysfunction and heart failure [54]. In our study, [Ca 2+ ]i overload and reduced activity of Ca 2+ ATPase were observed in Ang II treated cells (Fig. 7A) [53]. In addition to this, [Ca 2+ ]i overload can also enhance mitochondrial ROS production by increasing the metabolic rate, which in turn leads to respiratory chain electron leakage. Furthermore, Ca 2+ can enhance the dislocation of cytochrome C from the mitochondrial inner membrane, and this results in an effective block of the respiratory chain at complex III, which would enhance ROS generation [55]. Since mitochondrial oxidative damage plays a significant role in cardiac dysfunction, protecting mitochondria from oxidative damage should be an effective therapeutic strategy. Scavenging ROS within the mitochondria may protect the heart against the development of heart failure and make it more resistant to stressful stimuli [56]. Our previous studies with Boerhaavia diffusa have demonstrated its antioxidant and antihypertrophic potential in H9c2 cells [16,21]. BDE contains various bioactive phenolic compounds that are potent antioxidants and play a significant role in the management of diseases associated with oxidative stress. In our study, the total phenolic content (TPC) of the BDE was estimated to be 123.76±3.43 mg gallic acid equivalents/g extract and the total flavonoid content (TFC) was estimated to be 62.51±3.19 mg catechin equivalents/g extract. Various active compounds in B. diffusa include punarnavine, ursolic acid, punarnavoside, liriodendrin, eupalitin, eupalitin-3-O-β-D-galactopyranoside, rotenoids like boeravinones A, B, C, D, E, F and G, quercetin, kaempferol, etc. [21,57]. Among these, quercetin exhibits antioxidant, antihypertrophic and antihypertensive potential in in vitro and in vivo experimental models [58,59]. Ursolic acid is reported to possess cardioprotective potential via inducing uncoupling of mitochondrial oxidative phosphorylation and reducing mitochondrial H 2 O 2 production [60]. Eupalitin-3-O-β-D-galactopyranoside is reported to possess immunosuppressive properties, and it inhibits the nuclear translocation of NF-κB [61]. Kaempferol is also reported to possess cardioprotective potential, and boeravinone G is another antioxidant and genoprotective compound in B. diffusa [62,63]. Liriodendrin isolated from B. diffusa is reported to possess Ca 2+ channel antagonistic properties in the heart [64]. The presence of these active constituents might be responsible for its protective activity against Ang II induced hypertrophy. Overall, the results reveal that angiotensin II induces alterations in mitochondrial function in H9c2 cells and that BDE protects the mitochondria from the deleterious effects of angiotensin II by reducing ROS levels, dissipation of the transmembrane potential, opening of the mitochondrial permeability transition pore and mitochondrial swelling, and by enhancing the activities of the mitochondrial electron transport chain complexes, aconitase and thioredoxin reductase; BDE also maintained calcium homeostasis through its phenolic-mediated antioxidant potential. The outcome of this study shows the possibilities of nutraceuticals from this edible medicinal plant, Boerhaavia diffusa, for cardiovascular diseases, which are a major health issue of the present century.
However, further detailed studies are required to establish its molecular mechanisms and therapeutic potential for the maximum utilization of this green leafy vegetable. | 2016-05-12T22:15:10.714Z | 2014-04-30T00:00:00.000 | {
"year": 2014,
"sha1": "0efeba85b752993f6d9f8c29f55608448016f33d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0096220&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0efeba85b752993f6d9f8c29f55608448016f33d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
256865539 | pes2o/s2orc | v3-fos-license | Estimated Dietary Fluoride Intake by 24-Month-Olds from Chocolate Bars, Cookies, Infant Cereals, and Chocolate Drinks in Brazil
The use of fluoride (F) in the prevention of dental caries is established. However, a high amount of F intake during tooth development can cause dental fluorosis. The aim of this study was to analyze variations in F concentrations in chocolate bars (CB), chocolate cookies (CC), infant cereals (IC), and chocolate milk drinks (CD) to determine the daily intake of F from different sources by children at the age of risk for developing dental fluorosis. Distinct brands of CB, CC, IC, and CD were analyzed. Fluoride was separated by hexamethyldisiloxane-facilitated diffusion. Analysis was made in triplicate with an F ion-specific electrode. F ingestion (mg/kg body weight) was compared with the suggested intake (0.05–0.07 mg/kg/day) for children aged 24 months (12 kg). The concentrations for all the analyzed products ranged from 0.025 to 1.827 µg/g F. The mean (range) F concentrations were CB = 0.210 ± 0.205 µg/g (0.073–0.698, n = 8), CC = 0.366 ± 0.416 µg/g (0.320–1.827, n = 9), IC = 0.422 ± 0.395 µg/g (0.073–1.061, n = 5), and CD = 0.169 ± 0.170 µg/mL (0.025–0.443, n = 12). The products that had the highest concentration in the categories CB, CC, IC, and CD, respectively, were Nescau-Ball (0.698 µg/g), Passatempo (1.827 µg/g), Milnutri (1.061 µg/g), and Toddynho (0.443 µg/mL). The consumption of only one unit of Toddynho (CD) is equivalent to more than 11% of the maximum suggested daily intake for a 24-month-old child (0.07 mg/kg body weight). When one product from each category is consumed together only once a day, this consumption is equivalent to approximately 24% of the suggested daily intake of fluoride for a 24-month-old child. The presence of high levels of fluoride in certain products suggests that they play a significant role in overall fluoride intake. It is crucial to closely monitor the fluoride content of food and drinks that are consumed by children who are at risk for dental fluorosis, and for product labels to clearly display the fluoride concentrations.
Introduction
Fluoride (F) has been studied since the last century as a major cariostatic agent. The use of F in the dental area has brought great positive impacts in the area of public health [1]. However, excessive ingestion of F during the period of tooth formation may lead to dental fluorosis (DF). There are several sources of F intake, such as oral hygiene products, water, and food [2][3][4][5].
Studies have shown that ages between 6 and 9 months present great potential for the development of DF in the first dentition, with a higher prevalence in the first and second deciduous molars [6][7][8]. On the other hand, up to the ages of 6 to 8 years, all permanent teeth have an increased risk of being affected by DF [9][10][11]. However, for the maxillary central incisors that are of greatest cosmetic importance, the critical period of susceptibility to DF comprises the first 3 years of life [12]. Recent studies have shown an increase in the prevalence of DF, both in the primary [7,8,13] and in the permanent [9,10,14] dentitions. Therefore, it is extremely important to assess children's sources of F intake individually, as total F intake impacts the development of DF [5,15,16].
The 'optimal' level of F intake that provides maximum protection against caries with minimum risk of provoking DF is not known so far [16]. A plethora of factors interfere in the metabolism of F, affecting the balance between F intake and retention by the organism and, therefore, modifying the risk of developing DF [3]. This might be responsible for the current unavailability of a Dietary Reference Value for F [17]. The Institute of Medicine of the USA (IOM) elaborated on what would be the adequate intakes (AIs), based on the minimization of caries and the reduction of the adverse effects of F on the health of individuals as a reference. AI values were 0.01 mg/day for infants from birth to 6 months and 0.05 mg/kg of body weight/day for children older than 6 months and adults. The Institute even suggested a daily intake of 0.1 mg/kg of body weight/day as the upper limit of F for infants and children up to 8 years of age [18]. Due to the lack of evidence on the 'optimal' level of F intake, the range of 0.05-0.07 mg/kg body weight/day was empirically established and is still employed [19].
In addition, studies have indicated changes in children's eating patterns over the last three decades [3,20]. This may have been attributed to the consumption of different types of food, perhaps due to more industrialized options, as well as to different methods of analyzing the F that is present in the products that are more commonly available in the market [21,22]. Currently, it is observed that in different age groups, children are exposed to distinct sources of F, but there are only a few studies that report the risk-benefit relationship of F intake through consumption of foods and beverages [9,23]. Moreover, the F concentration is not stated on the products' labels, which means that it is necessary to analyze their F content in order to provide information to parents of children at the age of risk to DF on products that could potentially be important contributors for the total daily F intake of their children.
Thus, in the present study, the amount of F present in infant cereals, chocolate drinks, chocolate bars, and chocolate biscuits that are commonly available in Brazil was evaluated. The concentrations found were employed to estimate the contribution of these foods and beverages for the total daily F intake of 24-month-old children (~12 kg), who are subject to a greater risk of developing DF fluorosis in the upper central incisors.
Materials and Methods
A total of five samples of infant cereal products (IC), twelve chocolate milk products (CD), eight chocolate bars (CB), and nine chocolate cookies (CC) of different brands (Table 1) were purchased in markets in Bauru and Sorocaba, São Paulo, Brazil, with 3 batches of each product. The products were chosen because they are very popular among babies and young children, have packaging that is attractive to this public, and are available in all Brazilian states. Moreover, some of these products have been shown in previous studies [13,20,[24][25][26][27] to have a high F content. In addition, some categories, such as infant cereals, had a lower sample size due to the low variability of brands in the market. Foodstuff type, brand name, manufacturer's name, and place of production were indicated for each studied product (Table 1).
Preparation and Fluoride Analysis
The packages were only opened on the day of the analyses. The amounts analyzed were 0.4 g for ICs and 0.4 mL for CDs. CBs were initially frozen, then grated and weighed (0.2 g). For the CCs, the fillings were separated from the wafers, and both were macerated. After this preparation, 0.3 g was used for F analysis. All the samples were placed in plastic Petri dishes. Each item was weighed and analyzed in triplicate. All the samples were diluted in deionized water (Purelab Option-Q, Veolia Water Technologies, Buenos Aires, Argentina). Knowing the exact amounts of ICs, CDs, CBs, and CCs, it was possible to calculate the F content of the original, dry products.
Fluoride determinations were performed as previously described [28], after overnight hexamethyldisiloxane (HMDS)-facilitated diffusion, using the F ion-specific electrode (model 9409, Orion Research, Cambridge, MA, USA) coupled to a calomel reference electrode (Accumet, #13-620-79). Standards containing 0.25, 0.5, 1.0, 5.0, 10.0, and 50.0 nM F were prepared and diffused along with the samples to be analyzed. The millivoltage readings were converted to µg F using a standard curve with a coefficient of r² ≥ 0.99. All samples were analyzed in triplicate. Table 1 shows the name, manufacturer, and production site of the infant foods and beverages analyzed.
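As an illustration of this conversion step, a minimal Python sketch follows. It fits a log-linear standard curve (electrode potential against the logarithm of the fluoride concentration, the usual ion-selective electrode response) and back-calculates a sample reading. All numeric readings are hypothetical, and this is not the processing routine used in the study.

    import numpy as np

    # Hypothetical millivolt readings for the diffused standards (0.25-50 nM F);
    # the electrode response is approximately linear in log10(F).
    standards_f = np.array([0.25, 0.5, 1.0, 5.0, 10.0, 50.0])      # illustrative
    standards_mv = np.array([98.1, 81.2, 64.0, 24.6, 7.5, -32.8])  # made-up values

    slope, intercept = np.polyfit(np.log10(standards_f), standards_mv, 1)

    # r^2 of the calibration, which should be >= 0.99 before samples are converted
    pred = slope * np.log10(standards_f) + intercept
    r2 = 1 - np.sum((standards_mv - pred) ** 2) / np.sum((standards_mv - standards_mv.mean()) ** 2)

    def mv_to_fluoride(mv):
        """Convert a sample millivolt reading back to F via the standard curve."""
        return 10 ** ((mv - intercept) / slope)

    print(f"r^2 = {r2:.4f}, sample at 30.0 mV -> {mv_to_fluoride(30.0):.2f} (same unit as the standards)")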
Results
The results that were obtained for the ICs are shown in Table 2, together with the brands, manufacturers, and mean F concentrations, expressed in µg/g. The IC group had a mean [F] ± SD (amplitude; µg/g) of 0.422 ± 0.395 (0.073–1.061, n = 5). The suggested range of F intake is 0.05 to 0.07 mg F/kg body weight/day. Therefore, the IC results showed that for the two brands with the highest concentrations [Milnutri Arroz e Aveia (1.061 µg/g, Danone) and Neston Vitamina (0.496 µg/g, Nestlé)], 30 g is equivalent to almost 4–5% and 2–3%, respectively, of the maximum daily F intake for a 24-month-old child (12 kg), based on the suggested consumption. The mean [F] ± SD and amplitude (µg/g) of the chocolate drinks (CD) were 0.169 ± 0.175 (0.025–0.443, n = 12), as shown in Table 3. In this group, relatively high F concentrations were found in two brands: Nescau Zero Lactose (0.425 µg/g, Nestlé) and Toddynho (0.443 µg/g, Pepsico). The consumption of one unit (200 mL) of Nescau Zero Lactose or Toddynho represents 10–14% and 11–15%, respectively, of the daily range of F intake for a 24-month-old child (12 kg). Most of the products in the chocolate bars (CB) category in Table 4 had low F concentrations, except for the Nescau Ball group (0.698 µg/g, Nestlé), for which a 40 g portion represents almost 3–5% of the suggested daily F intake. The mean [F] ± SD and amplitude (µg/g) of CB were 0.210 ± 0.192 (0.073–0.698, n = 8). The chocolate cookies (CC), shown in Table 5, included only two brands with lower F concentrations. The mean [F] ± SD and amplitude of the CC group (µg/g) were 0.849 ± 0.392 (0.320–1.827, n = 9). The brand with the lowest F concentration was Nikito chocolate (0.320 µg/g, Vitarella). All the other brands showed higher F concentrations. The product with the highest F concentration was Passatempo chocolate (1.827 µg/g, Nestlé), which represents 7–9% of the range of daily F intake for a 24-month-old child (12 kg).
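The percentages above follow from a simple conversion of the product's F concentration and portion size into a fraction of the suggested daily intake. A minimal Python sketch of that arithmetic follows; the Milnutri values are those reported above, and the function is only an illustration of the calculation, not an analysis script from the study.

    def intake_fraction(conc_ug_per_g, portion_g, body_weight_kg=12.0):
        """Fraction of the suggested daily F intake (0.05-0.07 mg/kg/day) supplied
        by one portion of a product, for a 24-month-old child weighing 12 kg."""
        intake_mg = conc_ug_per_g * portion_g / 1000.0          # convert µg to mg
        frac_upper = intake_mg / (0.07 * body_weight_kg)        # vs. 0.07 mg/kg/day
        frac_lower = intake_mg / (0.05 * body_weight_kg)        # vs. 0.05 mg/kg/day
        return frac_upper, frac_lower

    # 30 g of Milnutri at 1.061 µg F/g, as reported for the IC group
    upper, lower = intake_fraction(1.061, 30)
    print(f"{upper:.1%}-{lower:.1%} of the suggested daily intake")   # about 3.8%-5.3%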
Discussion
Even though F is not generally regarded as an essential element, it has an important role on the mineralization of hard tissues and on caries control, and is included in the list of ultratrace elements (an element with an established or estimated requirement, generally indicated in µg/day for humans) [29]. There are three main ways to deliver F to control caries: community-based methods (such as fluoridated water, salt, and milk), professionally administered methods (such as F gels and varnishes), and self-administered methods (such as toothpastes and mouthwashes). The best caries-preventive effect of F occurs when the ion is present, even in low concentrations, in the fluid phases of the oral environment surrounding the teeth [30,31]. The mode-of-action of F is essentially post-eruptive and can be attributed mainly to its influence on the de-and re-mineralization kinetics of dental hard tissues [1,32,33].
Although evidence on the dental effects of topical F is widely accepted, the risk-benefit ratio of systemic exposure to F ingestion is a complex scenario, as excessive intake during tooth development can increase the risk of DF [34]. Several studies have reported an increase in the occurrence of this dental condition in both primary [7,8,13] and permanent dentitions [9,10,14,35]. Studies on the timing of F intake and fluorosis have focused on the most aesthetically important teeth, the maxillary central incisors [10][11][12]36,37]. In a meta-analysis aimed at defining a "risk period" for the development of fluorosis in upper permanent central incisors, Bardsen [11] concluded that the duration of exposure to F during the process of amelogenesis is a significant predictor of risk for fluorosis. Additionally, it was determined that it is challenging to identify specific periods as being more hazardous. Evans et al. [36] found that males are most susceptible to fluorosis of maxillary central incisors around 15-24 months, and females around 21-30 months. In addition, children who were exposed to high levels of F during their first and second years of life had an increased risk of developing DF on their maxillary and mandibular central incisors, as well as their first molars [12,37].
Recent trends in infant feeding habits and parenting practices have led to increased consumption of processed foods that may contain high levels of F [38,39], which may contribute to the development of DF. Therefore, it is particularly important to analyze the F content of foods consumed by children more often. Due to this, we focused on choosing these popular Brazilian products to evaluate their F concentration.
Conducting studies on food intake in 24-month-olds is crucial, as it has been shown that at this age dietary sources contribute 53% of their F intake [5]. Research has suggested that a total F intake of 0.05–0.07 mg/kg body weight per day in children can provide dental health benefits. However, it is important to ensure that F intake does not exceed this level to minimize the risk of DF, particularly during the process of enamel formation [19]. Considering this range, the total F intake for a 24-month-old child weighing 12 kg varies from 0.6 to 0.84 mg F per day. Given the wide variety of F-containing commercial products for 24-month-old children, the selected products were based on previous studies in which similar products were shown to have a high F content [20,[24][25][26][27]. Moreover, some of them were chosen because they come in attractive and colorful packaging with children's characters. To simplify the analytical procedures and the discussion of the results, the products were divided into groups: infant cereals, chocolate drinks, chocolate bars, and chocolate cookies.
A large variation in F concentration is observed in the literature for infant cereals. A study by Dabeka et al. [40] analyzed 334 commercial infant foods in Canada and found that infant cereals had a range of F concentrations between 1.24 and 4.89 µg/g. Additionally, Buzalaf et al. [24] found that F concentrations in infant cereals ranged between 0.43 and 6.64 µg/g, and observed consistency in values that were obtained from the same product manufactured on different dates. In 2004, Buzalaf et al. [26] found higher F concentrations (between 2.11 and 7.84 µg/g) than in 2002. Another study reported similar values (4–6 µg/g) [39]. All the cereals that were analyzed in this study, except one, had low F concentrations (between 0.073 and 1.061 µg/g). This is in agreement with Vlachou et al. [41] and Wiatrowski et al. [42] (0.01–0.31 µg/g; 0.63–1.17 µg/g), respectively. The differences in F values found in cereals may be related to the use, in the production of these foods, of water fluoridated at different concentrations.
The F concentrations in chocolate drinks were found to vary greatly among the brands that were analyzed, ranging from 0.025 to 0.443 µg/mL. Two brands, Nescau Zero Lactose (Nestlé®) and Toddynho® (Pepsico), were found to have F concentrations above 0.4 µg/mL, with measurements of 0.425 µg/mL and 0.443 µg/mL, respectively. Although previous studies by our group had already identified high levels of F in chocolate milk, the source of the F could not be determined [20,24,26]. Some brands of chocolate milk were found to have fluoride levels that exceeded the threshold dose associated with the development of dental issues, results that are consistent with the literature [20,24,26]. One unit of Toddynho® (Pepsico) or Nescau Zero Lactose (Nestlé®) can reach 15% or 14% of the maximum daily F intake for a 24-month-old child (12 kg), respectively. Thus, it is of great importance that parents monitor their children's intake of these chocolate drinks.
Most of the products from chocolate bars had low F concentrations (the lowest was Tortuguita brigadeiro with 0.073 µg/g), except for the Nescau Ball group (0.698 µg/g, Nestlé). These findings are consistent with the literature that reports an F concentration ranging from 0.07 to 1.60 µg/g [25]. A packet of Nescau Ball (75 g) is equivalent to 6–9% of the total daily F intake for a 24-month-old child (12 kg). The chocolate cookies showed only two brands with lower F concentrations. On the other hand, all the other brands showed higher F concentrations. Passatempo chocolate had the highest F concentration (1.827 µg/g, Nestlé). Even though high concentrations were found, they are still below those reported in the literature, 6.9–13.7 µg/g [26] and 7.1 µg/g [25]. When a 2-year-old child weighing 12 kg consumes only three units of Passatempo chocolate once a day, it accounts for up to 7% of their maximum recommended daily F intake of 0.07 mg/kg. It is interesting to mention that three units of Passatempo, according to the manufacturer, contain 19 g of carbohydrates. This covers only around 10% of the daily carbohydrate demand of a 2-year-old child, considering a diet of 1300 kcal, which means that a child is likely to consume more than three units on a single day.
Many studies measure the amount of F in foods [15,20,24,25,27], but it is important to also consider how much of that F is absorbed. Since it can be difficult to conduct human bioavailability studies, determining the F that can dissolve in the gastric juice is a key area of focus. Buzalaf et al., in 2004, found that all of the F that was present in cereals was soluble (SF). However, in chocolate-flavored milk, only about half of the total F (TF) was SF. Similarly, for biscuits, only around 20% of TF was SF. This may be because high levels of calcium in milk and calcium-rich biscuits, as indicated on their labels, can bind to F and make it less available for absorption. The study suggested that certain cereals, beverages, and biscuits may be significant sources of daily F intake [26]. Surprisingly, among the chocolate cookies, Passatempo that had the highest concentration of F is also the cookie that has the highest amount of calcium according to the nutritional information on the label.
In another study, Trautner and Einwag [43] suggested that the formation of calcium salts and entrapment of F in the coagulation products of milk can reduce F bioavailability. They also propose that a prolonged stay of chyme after consuming food increases the bioavailability, since the digestion processes can liberate F from bound forms and coagulation products [43]. In addition, Nopakun et al. [44] reported that only a quarter of F absorption occurs in the stomach, with the remainder taking place in the small intestine. This suggests that the daily F intake that is estimated through the measurement of hydrochloric acid (HCl) SF may be an underestimation. In addition, there is some evidence that lower intake of calcium and vitamin D in lower socioeconomic status adolescents increases the bioavailability of F and the risk of severe fluorosis [45]. In fact, the bioavailability of F in vivo is complex, and definitive conclusions can only be drawn through in vivo studies that are conducted on human subjects.
It is possible that the high F concentrations identified in these products represent substantial contributions to the overall daily F intake. For a 2-year-old, the maximum suggested daily F intake is 0.07 mg F/kg/day. When one portion of the product with the highest F content from each category is consumed only once a day, the total amount of F ingested reaches about 24% of this upper limit (0.07 mg F/kg/day) of the F intake that is regarded as "optimal" to prevent caries with minimum risk of causing DF [3]. By identifying potential sources of high F intake, recommendations can be made to reduce the consumption of these sources by patients who may be at risk for DF.
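A rough numerical check of this "one product from each category, once a day" scenario is sketched below in Python. The portion sizes follow the text where they are given (a 200 mL drink, a 40 g chocolate portion, 30 g of cereal); the 30 g assumed for three cookies is not stated in the text and is only an assumption made here.

    portions = {
        "Toddynho (CD)":    (0.443, 200),   # µg F per mL, mL
        "Nescau Ball (CB)": (0.698, 40),    # µg F per g, g
        "Milnutri (IC)":    (1.061, 30),
        "Passatempo (CC)":  (1.827, 30),    # assumed mass of three cookies
    }
    total_mg = sum(conc * amount for conc, amount in portions.values()) / 1000.0
    upper_limit_mg = 0.07 * 12              # 0.07 mg/kg/day for a 12 kg child
    print(f"total F = {total_mg:.3f} mg/day, "
          f"{total_mg / upper_limit_mg:.0%} of the upper limit")   # roughly 24%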
The wide range of F content within food and beverage groups verified in the UK fluoride database [34], in several Brazilian studies [20,[24][25][26][27] and in a recent study in the US [21] accentuates the need for comprehensive F labeling of foods and beverages, especially those that are frequently consumed by infants and young children. Monitoring F intake can be challenging and labor-intensive, requiring the assessment of F ingestion from both diet and toothpaste. A study that was conducted in the US reported that some of the analyzed bottled waters that were intended for infants did not meet the American Dental Association's (ADA) recommendation to prevent fluorosis [46]. Recently, the US Food and Drug Administration made the identification of F in bottled water containing F mandatory and amended the allowed level of F in bottled water to which F is added to 0.7 mg/L [47]. In Europe, fewer than 26% of the brands of bottled water label their F content [47]. In Brazil, as well as in Europe, manufacturers are not required to include information about the F content on food and beverage labels [34,48]. Control of F intake, however, would be facilitated by the labeling of F content on food and drink products [34]. The identification of potential sources of excessive F intake allows the formulation of appropriate recommendations, which may include the reduction of consumption of these sources by individuals who are at risk for DF.
Conclusions
The high concentrations of fluoride that are found in some products in the present study highlight the importance of measuring the F content of foods and beverages that are consumed by young children. It is imperative to consider these products as potentially significant contributors to overall F intake. In addition, it is recommended that the F concentrations of these products be clearly stated on their labels for consumer awareness and informed decision-making. | 2023-02-15T16:10:45.413Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "8af63354ced80304acfd9f6e0467bfae88314653",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/20/4/3175/pdf?version=1676096387",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62d3d776c300224051240ccaae0bd6f7a36be7e5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238849775 | pes2o/s2orc | v3-fos-license | The isotopic signature of the “arthropod rain” in a temperate forest
Forest canopy is densely populated by phyto-, sapro-, and microbiphages, as well as predators and parasitoids. Eventually, many of crown inhabitants fall down, forming so-called ‘arthropod rain’. Although arthropod rain can be an important food source for litter-dwelling predators and saprophages, its origin and composition remains unexplored. We measured stable isotope composition of the arthropod rain in a temperate mixed forest throughout the growing season. Invertebrates forming arthropod rain were on average depleted in 13C and 15N by 1.6‰ and 2.7‰, respectively, compared to the soil-dwelling animals. This difference can be used to detect the contribution of the arthropod rain to detrital food webs. Low average δ13C and δ15N values of the arthropod rain were primarily driven by the presence of wingless microhytophages, represented mainly by Collembola and Psocoptera, and macrophytophages, mainly aphids, caterpillars, and heteropterans. Winged arthropods were enriched in heavy isotopes relative to wingless specimens, being similar in the isotopic composition to soil-dwelling invertebrates. Moreover, there was no consistent difference in δ13C and δ15N values between saprophages and predators among winged insects, suggesting that winged insects in the arthropod rain represented a random assemblage of specimens originating in different biotopes, and are tightly linked to soil food webs.
to leaf litter 16,17, while in the grazing food webs, the difference in δ13C values between green plants and phytophages typically does not exceed 1‰ 18,19. Furthermore, many microarthropods inhabiting the canopy are likely microphytophages feeding on algae, lichens, and mosses 20. Non-vascular plants and epiphytes, in general, typically have low δ15N values due to the assimilation of 15N-depleted compounds from wet atmospheric deposits [21][22][23]. Consequently, arthropods trophically linked to non-vascular plants are depleted in 15N relative to litter 24. This depletion should result in the relatively low δ15N values in the arthropod rain. In contrast, most soil animals are strongly enriched in 15N compared to phytophages due to the accumulation of heavy nitrogen in microbial biomass at basal levels of detrital food webs 16. Overall, due to the prevalence of macrophytophages and microphytophages in the arthropod rain, it can be expected to be depleted by 2-3‰ in 13C and 15N content relative to the animals belonging to detrital food webs in soil.
Although a considerable difference in resource base (and consequently in the isotopic composition) of the crown fauna and soil-dwelling species is expected, some members of the arthropod rain would, in fact, belong to detrital food webs. First of all, these are winged insects that could spend their early life stages or feed in the soil 25 . Second, besides winged insects, actively moving (climbing) specimens of wingless macrofauna (mostly predators like harvestmen, spiders, ants) move freely between the litter and the tree canopy connecting belowground and aboveground food webs 1,8 . Finally, detritus and small-scale detrital food webs can be quite abundant in the canopy, supporting a relatively rich fauna of typical microbivores and detritivores such as Oribatida or Collembola 26,27 .
Thus, the study of the isotopic composition of the arthropod rain would contribute to elucidating trophic relationships of its constituent invertebrates. Furthermore, it would allow us to assess the possibility of evaluating the contribution of arthropod rain to the nutrition of soil invertebrates.
In this study, we estimated the stable isotope composition of invertebrates forming arthropod rain in a temperate forest. Our main goal was to compare δ 15 N and δ 13 C values of the arthropod rain with those of soil-and litter-dwelling invertebrates. We hypothesized that (1) invertebrates forming arthropod rain are depleted in 13 C and 15 N compared to the soil-dwelling animals. We further proposed that (2) winged insects in the arthropod rain have on average higher δ 13 C and δ 15 N values than wingless invertebrates because the former are likely to have tight trophic connections with soil and detrital food webs.
Results
The most numerous taxa in the arthropod rain were Collembola and Acari. Collembola were mainly represented by Entomobryidae, Sminthuridae, Dicyrtomidae, and Poduromorpha. Mites were mainly Trombidiformes, Gamasina, Astigmatina, and Oribatida. The most species-rich Insecta orders were Coleoptera and Diptera, forming the main part of the winged specimens. Numerous fly larvae were represented by 18 families, Mycetophilidae, Sciaridae, and Cecidomyiidae being most frequent. Coleoptera were represented by 29 families, and the most numerous were Staphylinidae, Ptiliidae, and Lathridiidae. Numerous aphids, some heteropterans, and relatively rare lepidopteran larvae were the most characteristic representatives of macrophytophages. Hymenoptera were represented mainly by parasitoid Mymaridae, Ceraphronidae, Diapriidae, and the rare wingless Formicidae. Spiders were represented by web-builders and striders; most species typically inhabit vegetation (Araneidae, Theridiidae, and Thomisidae including Enoplognatha ovata Clerck, Xysticus sp. and Philodromus sp.), but some litter-dwelling species (Ceratinella brevis Wider, Linyphiidae; Ozyptila praticola C. L. Koch, Thomisidae) were also present. Opiliones were not abundant, but accounted for a large proportion of the mass and were represented by Mitopus morio Fabricius and Phalangium opilio L. Data on the seasonal fluctuations in arthropod rain intensity are given in Rozanova et al. 12 .
Isotopic analysis of the arthropod rain revealed a large range of both δ 13 C and δ 15 N values. Mean isotopic composition of the main taxonomic groups and life stages of the arthropods, along with exuviae, frass, and various plant materials, are given in Table S1. The total ranges of litter-normalized Δ 13 C and Δ 15 N values of individual animals forming the arthropod rain (n = 730) were 13.7‰ (from − 3.4‰ in imago Anisopodidae, Diptera to 10.3‰ in imago Staphylinidae, Coleoptera) and 26.2‰ (from − 7.2‰ in Psocoptera to 18.7‰ in imago Figitidae, Hymenoptera), respectively. The Δ 13 C and Δ 15 N values of excrements (frass) and exuviae were within the range of arthropod δ values (Fig. S2).
The previously reported ranges of mean litter-normalized Δ 13 C and Δ 15 N values of soil invertebrates in temperate forests (n = 1300 16 ) were similar to those of the arthropod rain. In the isotopic bi-plot ( Fig. 1) nearly all soil invertebrates are found within the convex hull formed by the arthropod rain. Standard ellipses of the arthropod rain and soil invertebrates largely overlapped, but the arthropod rain ellipse was shifted towards lower Δ 13 C and Δ 15 N values. Therefore, mean Δ 13 C and Δ 15 N values of soil invertebrates (4.0 ± 0.1‰ and 4.2 ± 0.1‰, respectively) were significantly higher than those of the arthropod rain (2.4 ± 0.1‰ and 1.5 ± 0.1‰, respectively) (Mann-Whitney test, P < 0.05). Due to the contribution of large specimens of spiders and especially harvestmen, weighted mean Δ 13 C and Δ 15 N values (according to the Eq. (3)) of the total arthropod rain were somewhat higher (2.7‰ and 2.4‰, respectively) ( Table 1).
As a rule, individual taxonomic groups forming arthropod rain had Δ 13 C and Δ 15 N values lower than those of animals collected from soil and litter. Significant differences both in Δ 13 C and Δ 15 N values between specimens originating from the arthropod rain (original data) and the soil (published data) were found in Diptera, Collembola, and Araneae (Fig. 2). Coleoptera from the arthropod rain were significantly depleted in 15 N but enriched in 13 C compared to soil-dwelling animals.
Discussion
Arthropod rain sampled in two biotopes in a temperate mixed forest throughout the vegetation season reflected a great taxonomic and functional diversity of crown fauna and air plankton. Seasonal changes in the abundance and taxonomic composition of the arthropod rain were reported elsewhere 12. The stable isotope composition of the arthropod rain (730 samples) was compared to a large reference dataset of the isotopic composition of soil animals from temperate forests compiled in Potapov et al. 16 (1300 samples). Both datasets contained litter-normalized Δ13C and Δ15N values, allowing a direct comparison of data collected in different biotopes 28. As a note of caution, it should be stressed that sizes of the standard ellipses reflecting "isotopic space" of soil animals and arthropod rain (Fig. 1) cannot be compared directly, as they were based on species means and individual measurements, respectively. Nevertheless, their centroids can be accurately compared. This accuracy is confirmed by a close similarity in the isotopic signatures of soil macro- and mesofauna collected in this study and those represented in the reference dataset (Fig. S1). Consistent with our first hypothesis, invertebrates forming arthropod rain were on average depleted in 13C and 15N compared to the soil-dwelling animals. Preservation of the arthropod rain invertebrates in 75% alcohol could not affect this conclusion since the expected change in 13C content due to leaching of lipids 29 would increase, rather than decrease, the δ13C value of the arthropod rain.
The overall depletion of the arthropod rain in 13 C was mainly driven by the presence of a significant proportion of macro-and microphytophages with relatively low δ 13 C values 15,30 . Furthermore, there was a clear difference between microphytophages and macrophytophages in δ 15 N values (Fig. 3), consistent with the difference in isotopic signatures of their basic trophic resources: non-vascular plants, such as algae and lichens, and fresh leaves, respectively. Indeed, the difference in Δ 15 N values between micro-and macrophytophages roughly corresponded to the difference between crown lichens and fresh leaf litter (Table S1). These data corroborate previous reports on the importance of non-vascular plants in forest food webs 20 .
Microphytophages depleted in 15 N were represented mainly by Psocoptera and Collembola (Table S1). Psocoptera grazing on epiphytes are typical components of crown fauna 31 , while Collembola are usually regarded as typical soil animals feeding predominantly on fungi. Nevertheless, feeding of Collembola on 15 N-depleted lower plants has been repeatedly noted. According to Potapov et al. 24 , at least 20% of Collembola species in temperate forest soils are depleted in 15 N relative to litter, suggesting they are trophically linked to non-vascular plants, predominantly algae 20,32 . Thus, even in the soil, there are many phycophagous Collembola, but in the crowns, microphytophagy is apparently more widespread, as suggested by significantly lower δ 15 N and δ 13 C values in the Collembola from the arthropod rain than in soil-dwelling Collembola (Fig. 2).
Among other groups of arthropods well represented in both datasets, Diptera and Araneae were depleted in 13 C compared to soil-dwelling animals. This observation further confirms that the "detrital shift, " i.e., enrichment of detrital food webs with 13 C due to interactions with saprotrophic microorganisms (see Potapov et al. 16 and references therein) can be traced both in micro-and macroarthropods and also at higher trophic levels. Coleopterans did not follow this pattern (Fig. 2) likely because they were represented mainly by winged imagoes trophically linked to detrital food webs (Table S1).
Dead stems and branches, bark crevices, suspended litter and soil support a substantial amount of detritus in the crown space, which in turn harbors rich fauna of detritophagous arthropods 33,34 . Thus, the detrital shift can be expected and was observed in the canopy food webs 35 . Nevertheless, the isotopic signature of non-winged specimens, which presumably fed in the crowns, suggests that the effect of the detrital shift in crown fauna was considerably less pronounced than in the soil food webs (Fig. 4a). Furthermore, soil-dwelling taxa associated with mineral soil that are the most enriched in 13 C and 15 N, such as earthworms and euedaphic Collembola among saprophages, or gamasid mites and geophilid centipedes among predators 16,36 , were rare or absent in our samples of the arthropod rain.
On the other hand, a large range of δ 13 C values in macrophytophages (ca. 8‰, Fig. 3b) can be related to the "canopy effect, " i.e., a gradient in the concentration of 13 C in green leaves growing at different heights 37 . Therefore, phytophages that consumed green parts of vascular plants at different canopy heights could differ greatly in isotopic carbon composition.
As suggested by our second hypothesis, decreased δ 13 C and δ 15 N values were typical of wingless arthropods, while winged insects collected in the traps hardly differed in the isotopic composition from soil animals ( Table 1, Fig. 4b). Another important feature of winged insects was the lack of difference between predators and phytophages or microbi/saprophages, while in the wingless arthropods, this difference was pronounced (Fig. 4). This observation confirms that winged insects collected in the traps represented a random assemblage of specimens originating in different biotopes or local ecosystems. Nevertheless, isotopic signatures of the winged insects suggest that they mostly originated from the soil. This localization is especially true for Diptera and Coleoptera (Table S1) that often have litter-dwelling larvae 25 . Thus, exploring the descending gravity-driven flow 11 of arthropod rain, we found evidence of the ascending flow of the nutrients and energy from the soil to the crown layer.
The flux of arthropods falling from the crown space in temperate forests can be quite large. According to our calculations, its intensity is approximately 20 mg dry weight m−2 day−1 and can be comparable to the total food requirement of soil-dwelling spiders 12. A significant proportion of the arthropod rain biomass (up to 40% in certain months) consists of small and slow-moving arthropods (such as Psocoptera, Aphidoidea, and Collembola), which can be easy prey for predators. Furthermore, approximately a third of the arthropod rain consists of dead animals or their fragments, which decomposers can consume. One of the objectives of this study was to assess the possibility of evaluating the contribution of arthropod rain to the nutrition of soil invertebrates using stable isotope analysis. The biomass-weighted mean values of Δ13C and Δ15N of the arthropod rain were approximately 1.3 and 1.8‰ lower, respectively, than the mean Δ13C and Δ15N values of soil animals. Even smaller differences have been used to identify energy pathways in detrital food webs 38,39. However, soil food webs contain numerous microphytophages, e.g. Collembola, strongly depleted in 13C and 15N 20,24, while the difference in the isotopic composition between soil animals and arthropod rain is likely not consistent in different forest types. In particular, it was less pronounced in a monsoon tropical forest (Rozanova et al., unpublished data). Thus, the possibility of using the isotopic composition of the arthropod rain to quantify its dietary inputs into the soil food web remains questionable.
Overall, our data suggest that invertebrates falling from the crown space and flying arthropods originating from the soil are an important channel connecting food webs in the crown and the soil. Due to the large contribution of micro- and macrophytophages, the fraction of the arthropod rain consisting of wingless specimens differs considerably in δ13C and δ15N values from soil invertebrates belonging to detrital food webs.
Methods
Study site and sampling. The study was conducted in two forested plots near Malinky Biological Station (Moscow region, Russia, 55°27′42″ N, 37°11′10″ E) as described in Rozanova et al. 12 . The first plot was located in a mixed forest with spruce (Picea abies L.) and lime (Tilia cordata Mill.) forming the upper canopy. The second plot was situated nearby in a ca. 50-year-old pure P. abies plantation. The arthropod sampling was conducted using six custom-made traps in each of the two plots. The traps were open for 24 h once every two weeks (± 3 days) throughout the growing season from May to October 2017 (12 samplings in total). Further details are given in Rozanova et al. 12 . Collected arthropods were preserved in 75% ethanol and subsequently identified to the order or family level. In addition to arthropods, fresh leaf litter and pollen were collected from the traps. Other sampled substrates included soil (upper 5 cm) and mixed leaf litter from the soil surface. Lichens, bark, tree branches, green leaves, and needles were sampled at different heights of the canopy trees (from 1 to 20 m) in July 2017. Isotopic composition and the number of replications for these substrates are given in Table S1. Five samples of soil and litter (25 × 25 cm, 10 cm deep) were taken in each study plot, and soil macrofauna was extracted by hand-sorting. These soil-dwelling animals were subsequently used for validating the reference dataset (see below).
The current study was conducted in accordance with guidelines of collecting biological materials for scientific purposes (Federal Law #200 of 04/12/2006). The research did not involve rare or endangered species of plants or animals. The collection of plant material complied with relevant institutional, national, and international guidelines and legislation. The appropriate permissions for collection of plant specimens were obtained for the study.
Stable isotope analysis. All materials were dried at 50 °C for at least 72 h. Identified animals were weighed (dry wt.) individually or in batches of several conspecifics using a Mettler Toledo MX5 microbalance with 2 μg accuracy. For the isotope analysis of macrofauna, legs and/or head capsules of large arthropods were used 40. Small animals were analyzed alone or as a group of several individuals from the same taxonomic group (minimum sample weight was approximately 50 μg). Soil and plant materials were ground to powder using an MM200 ball mill (Retsch, Germany). Stable isotope analyses were performed using a Flash 1112 Elemental Analyzer (Thermo Fisher, USA) and a Thermo Delta V Plus isotope ratio mass spectrometer in the Joint Usage Center "Instrumental Methods in Ecology" at the A.N. Severtsov Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow. The carbon and nitrogen isotopic compositions were measured as deviations from the international standards (Vienna Pee Dee belemnite and atmospheric N2, respectively) and expressed in conventional δ values (‰): δX = (R_sample/R_standard − 1) × 1000 (1), where X is the element of interest (carbon or nitrogen), and R is the molar ratio of its heavy and light isotopes. The standard deviations of δ15N and δ13C values in laboratory standards were < 0.15‰.
Preservation of sampled animals in 75% ethanol could affect their isotopic composition. In particular, the δ13C values could slightly increase due to the washing out of 13C-depleted lipids. However, the effect of ethanol preservation is typically small and does not exceed 1‰ 41,42. We, therefore, did not apply any correction related to the preservation of samples. Data analysis. The stable isotope composition of arthropods and litter collected at two experimental plots did not differ. Furthermore, we did not detect significant changes in the isotopic composition of animals during the growing season (data not shown). All materials collected were therefore analyzed together. Local δ13C and δ15N values of plant litter are typically used as a baseline in isotopic studies of soil-dwelling invertebrates 16,28. To compare our results with published data, the isotopic composition of nitrogen and carbon of arthropods was therefore normalized using δ13C and δ15N values of fresh plant litter collected in the traps (δ13C −28.9 ± 0.1‰, δ15N −0.4 ± 0.1‰, n = 35; Table S1): X_normalized = δX_arthropod − δX_litter (2). Dry-mass weighted mean δ13C and δ15N values of the arthropod rain were calculated using the following equation: δX_weighted = Σ_{i=1..n} w_i δX_i (3), where w_i = m_i / Σ_{i=1..n} m_i, i.e., the mass proportion of the individual sample in the total flux, m_i is the dry mass of the i-th of n samples from a group of the arthropod rain, and δX_i is the isotope signature (δ13C or δ15N) of the individual sample.
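A minimal Python sketch of the calculations in Eqs. (1)-(3) follows: delta values from isotope ratios, litter normalization, and the dry-mass-weighted mean of the arthropod rain. The sample values below are illustrative, not measured data from the study.

    import numpy as np

    def delta(r_sample, r_standard):
        return (r_sample / r_standard - 1.0) * 1000.0           # Eq. (1), in per mil

    litter_d13c, litter_d15n = -28.9, -0.4                      # litter baseline values

    # hypothetical individual samples: (delta13C, delta15N, dry mass in mg)
    samples = np.array([(-27.1, 2.3, 0.8),
                        (-25.4, 4.1, 2.5),
                        (-29.0, -1.2, 0.3)])

    d13c, d15n, mass = samples.T
    delta13c_norm = d13c - litter_d13c                          # Eq. (2), litter-normalized
    delta15n_norm = d15n - litter_d15n

    w = mass / mass.sum()                                       # Eq. (3): dry-mass weights
    weighted_d13c = np.sum(w * delta13c_norm)
    weighted_d15n = np.sum(w * delta15n_norm)
    print(f"weighted mean Δ13C = {weighted_d13c:.2f}‰, Δ15N = {weighted_d15n:.2f}‰")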
The trophic position of individual taxonomic groups of arthropods was derived based on the known morphological and behavioral traits 43 and isotopic studies 16 . Some groups of arthropods that have fallen into our traps can move between the forest canopy and the litter layer. These are winged animals (the imago stage of various insects, marked in Table S1) and actively moving wingless predators, such as ants, harvestmen, and spiders. We, therefore, separated winged animals and predators from other arthropods.
We compared the isotopic composition of the arthropod rain (original data, n = 730) to that of soil invertebrates using the results of 23 studies performed in temperate forests compiled in Potapov et al. 16 (published results, n = 1300). Although the dataset used for comparison 16 contains averaged values rather than individual measurements, it fully reflects data on the isotopic composition of soil and litter-dwelling mesofauna and macrofauna in the vicinity of the Malinky Biological Station (Fig. S1). In addition, we compared the isotopic composition of high-rank taxonomic groups that were well represented (more than 30 measurements) in the original and published datasets (Diptera, Coleoptera, Collembola, and Araneae). The isotopic composition of arthropod rain and soil-dwelling invertebrates was compared using standard ellipses limiting the 95% confidence interval 44. The area and the overlap of the ellipses were calculated in the SIBER package, and violin plots (mirrored kernel density estimation) were produced in the ggplot2 package in R 45. Central tendencies are presented as means ± 1 SE. Pairwise comparisons were performed in STATISTICA 10 (StatSoft, Tulsa, USA) using the Mann-Whitney U test. P < 0.05 was considered statistically significant.
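For readers unfamiliar with the ellipse-based comparison, the Python sketch below computes a generic covariance-based 95% ellipse (centroid, semi-axes, area) for bivariate isotope data. It is only a rough stand-in for the standard-ellipse approach used via SIBER, not the SIBER implementation itself, and the data are synthetic.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    # synthetic (Δ13C, Δ15N) data roughly centred on the arthropod-rain means
    data = rng.multivariate_normal(mean=[2.4, 1.5], cov=[[1.5, 0.4], [0.4, 4.0]], size=200)

    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)

    scale = chi2.ppf(0.95, df=2)                 # 95% coverage in two dimensions
    semi_axes = np.sqrt(eigvals * scale)         # semi-axis lengths of the ellipse
    area = np.pi * semi_axes[0] * semi_axes[1]
    print(f"centroid = {mean.round(2)}, 95% ellipse area = {area:.2f} (per mil squared)")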
Data availability
Should the study be accepted for publication, the original dataset will be placed in an open repository (Figshare. com or similar). | 2022-01-11T14:35:30.386Z | 2022-01-10T00:00:00.000 | {
"year": 2022,
"sha1": "859adb0a253da269f79e14106702c6f3532a19fa",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-03893-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "86da2ad7b9c9132a81677e8947c373b9d28fdb4c",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Medicine"
]
} |
67807612 | pes2o/s2orc | v3-fos-license | Reliability Analysis of Random Fuzzy Unrepairable Systems
The lifetimes of components in unrepairable systems are considered as random fuzzy variables since randomness and fuzziness are often merged with each other. Then we establish the fundamental mathematical models of random fuzzy unrepairable systems, including series systems, parallel systems, series-parallel systems, parallel-series systems, and cold standby systems with absolutely reliable conversion switches. Furthermore, the expressions of reliability and mean time to failure (MTTF) are given for the above five random fuzzy unrepairable systems, respectively. Finally, numerical examples are given to show the application in a lighting lamp system and a hi-fi system.
Introduction
The conventional reliability theory has been successfully used for solving various reliability problems, in which the lifetimes of systems are assumed to be random variables and the system behavior can be fully characterized by probability theory. It is well known that reliability and mean time to failure (MTTF) are good evaluations in unrepairable systems, where the reliability is defined by the probability of the random event "the system is functioning at time t" and MTTF is the expected value of the random lifetime of the system. For results on classical reliability theory, the reader is referred to studies such as Barlow and Proschan [1], Dhillon and Singh [2], Epstein and Sobel [3], Gnedenko et al. [4], Kaufmann [5], Kapur and Lamberson [6], Natarajan [7], Ross [8], Sharma et al. [9], Lin and Yeh [10], Tian et al. [11], Marquez et al. [12], and Hsu et al. [13].
Although the traditional reliability theory has proved to be effective in many cases, using probabilistic methods in engineering problems rests on three basic premises: firstly, the events should be clearly defined; secondly, there should exist a large number of samples; and thirdly, the samples should be repeatable. If the three premises do not hold, using probability theory to deal with reliability problems has certain limitations. So fuzzy theory has been introduced into reliability theory by several authors. In 1975, Kaufmann [5] first used fuzzy theory in reliability engineering. Chowdhury and Misra [14] presented a method to find an expression of fuzzy system reliability of a non-series-parallel network, taking into consideration the special requirements of fuzzy sets. Cai et al. [15][16][17][18] introduced various forms of fuzzy reliability theories, including profust reliability theory, posbist reliability theory, and posfust reliability theory. Their studies can be considered as taking new assumptions, such as the possibility assumption or the fuzzy-state assumption, in place of the probability assumption or the binary-state assumption. Utkin [19,20] discussed fuzzy system reliability based on the binary-state assumption and the possibility assumption and considered the fuzzy availability and unavailability and the fuzzy operative availability and unavailability. Utkin and Gurov [21] proposed a general approach on the basis of a system of functional equations according to Cai's theory. In Praba et al. [22], a new method for finding fuzzy system reliability using posfust reliability theory was demonstrated, where the system was modelled as a unified fuzzy Markov model. Cooman [23] introduced the notion of a possibilistic structure function based on the concept of the classical two-valued structure function and studied the possibilistic uncertainty of the states of a system and its components. Huang [24] developed the fundamental calculation formulas of fuzzy reliability and established the fuzzy reliability models of unrepairable systems. Huang et al. [25] proposed a new method to determine the membership function of the estimates of the parameters and the reliability function of multiparameter lifetime distributions. In Liu et al. [26], reliability and performance assessment for fuzzy multistate elements was considered. Ding and Lisnianski [27] considered a multistate system where performance rates and corresponding state probabilities were presented as fuzzy values. Recently, Jiang and Chen [28] developed a computational model of fuzzy reliability focusing on solving engineering problems with random general stress and fuzzy general strength. Zhang et al. [29] considered a fuzzy age-dependent replacement policy, in which the lifetimes of components were treated as fuzzy variables. Linda and Manic [30] considered interval type-2 fuzzy voter design for fault-tolerant systems.
A more general case in practice is that randomness and fuzziness are merged with each other in one unrepairable system. Many researchers have paid attention to these problems. Wang and Watada [31] considered a renewal reward process with fuzzy random interarrival times and rewards under the T-independence associated with any continuous Archimedean t-norm. Based on using fuzzy random variables to characterize the lifetimes, Wang and Watada [32] studied redundancy allocation problems for a fuzzy random parallel-series system. Adduri and Penmetsa [33] performed system reliability analysis for mixed uncertain variables which contained both probability distributions and fuzzy membership functions. Utkin and Coolen [34] gave an overview of many methods and models for reliability problems mixed with randomness and fuzziness. Utkin et al. [35] studied a simple one-unit system description in the probability and possibility contexts. Considering the coexistence of randomness and fuzziness in actual projects, Li et al. [36] proposed a reliability-credibility model based on fuzzy theory, possibility theory, and credibility theory. Liu et al. [37] considered the fuzzy random reliability of structures based on fuzzy random variables. Random fuzzy theory proposed by Liu [38] mainly uses the average chance measure to evaluate random fuzzy events. Although many measures proposed by researchers have been used to deal with the behavior of random fuzzy phenomena, they have no self-duality properties. However, a self-dual measure is absolutely needed in both theory and practice. Until today, few people have used random fuzzy theory as the basic mathematical tool to deal with reliability problems. For example, Zhao et al. [39] applied random fuzzy theory to renewal processes, with results that are very useful in repairable system theory. Zhao and Liu [40] provided three types of system performances, in which the lifetimes of redundant systems were treated as random fuzzy variables. Since the important figures of merit for repairable systems are the limiting availability, steady-state failure frequency, mean time between failures, and mean time to repair, Liu et al. [41] gave the reliability analysis of a random fuzzy repairable series system with independent components. In most cases, the components in a system are dependent. So Liu et al. [42] considered two dependent components, established a random fuzzy shock model and a random fuzzy fatal shock model, and studied the bivariate random fuzzy exponential distribution.
Unrepairable systems are an important topic in system reliability theory. There are many reasons why a system may not be repaired: some systems cannot be repaired for technical reasons; some are not worth repairing for economic reasons; and some are treated as unrepairable in order to simplify the analysis of repairable systems. In this paper, random fuzzy variables are employed to represent the uncertain lifetimes of components in unrepairable systems. We establish the fundamental mathematical models of random fuzzy unrepairable systems, including series systems, parallel systems, series-parallel systems, parallel-series systems, and cold standby systems with absolutely reliable conversion switches. Furthermore, the expressions of reliability and MTTF are given for each of the above five systems. The expressions of reliability and MTTF of the random fuzzy unrepairable systems we arrive at remain valid in the purely stochastic and purely fuzzy cases, which shows that the reliability models and results in this paper generalize the traditional reliability theory.
The rest of this paper is organized as follows. In Section 2, we recall some basic concepts on fuzzy variables and random fuzzy variables. In Section 3, we establish the fundamental mathematical models of random fuzzy unrepairable systems and give the expressions of reliability and MTTF for each system. Some examples are also presented to illustrate how to calculate the reliability and MTTF of given unrepairable systems, in which the lifetimes of components follow certain probability distributions with fuzzy parameters. In Section 4, the application to a lighting lamp system and a hi-fi system is presented.
Fuzzy Variables and Random Fuzzy Variables
In this section, we first introduce some basic concepts of fuzzy variables based on the credibility measure. Definition 2 (Liu [44]). A fuzzy variable is defined as a function from the credibility space (Θ, P(Θ), Cr) to the set of real numbers.
Definition 5 (Liu [45]). Let ξ be a fuzzy variable and α ∈ (0, 1]. Then ξ_inf(α) = inf{r | Cr{ξ ≤ r} ≥ α} and ξ_sup(α) = sup{r | Cr{ξ ≥ r} ≥ α} are called the α-pessimistic value and the α-optimistic value of ξ, respectively.
Definition 6 (B. Liu and Y.-K. Liu [43]). Let ξ be a fuzzy variable. The expected value E[ξ] is defined as E[ξ] = ∫₀^∞ Cr{ξ ≥ r} dr − ∫_{−∞}^0 Cr{ξ ≤ r} dr, provided that at least one of the two integrals is finite. In particular, if ξ is a positive fuzzy variable, then E[ξ] = ∫₀^∞ Cr{ξ ≥ r} dr.
Proposition 7 (Y.-K. Liu and B. Liu [46]). Let ξ be a fuzzy variable with finite expected value E[ξ]; then one has E[ξ] = (1/2) ∫₀^1 (ξ_inf(α) + ξ_sup(α)) dα, where ξ_inf(α) and ξ_sup(α) are the α-pessimistic value and the α-optimistic value of ξ, respectively.
Definition 8 (Liu [44]). The fuzzy variables ξ_1, ξ_2, . . . , ξ_n are said to be independent if Cr{ξ_i ∈ B_i for i = 1, 2, . . . , n} = min_{1≤i≤n} Cr{ξ_i ∈ B_i} for any sets B_1, B_2, . . . , B_n of real numbers.
The concept of the random fuzzy variable was given by Liu [45]. Let (Ω, A, Pr) be a probability space and F a collection of random variables. A random fuzzy variable is defined as a function from a credibility space (Θ, P(Θ), Cr) to a collection of random variables F.
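The fuzzy-variable notions recalled above (Definitions 5-8) can be checked numerically. The following minimal Python sketch, with illustrative parameters, builds the credibility function of a triangular fuzzy variable from its membership function, reads off the α-pessimistic and α-optimistic values, and recovers the expected value via Proposition 7; it is only an illustration, not part of the paper's derivations.

    import numpy as np

    a, b, c = 1.0, 2.0, 4.0                    # illustrative triangular fuzzy variable
    xs = np.linspace(a, c, 4001)
    mu = np.where(xs <= b, (xs - a) / (b - a), (c - xs) / (c - b))     # membership

    left_max = np.maximum.accumulate(mu)                # sup_{x <= r} mu(x)
    right_max = np.maximum.accumulate(mu[::-1])[::-1]   # sup_{x >= r} mu(x)

    cr_le = 0.5 * (left_max + 1.0 - np.append(right_max[1:], 0.0))   # Cr{xi <= r}
    cr_ge = 0.5 * (right_max + 1.0 - np.append(0.0, left_max[:-1]))  # Cr{xi >= r}

    def pessimistic(alpha):            # inf{r : Cr{xi <= r} >= alpha}
        return xs[np.argmax(cr_le >= alpha)]

    def optimistic(alpha):             # sup{r : Cr{xi >= r} >= alpha}
        return xs[len(xs) - 1 - np.argmax(cr_ge[::-1] >= alpha)]

    alphas = np.linspace(0.005, 0.995, 200)
    expected = 0.5 * np.mean([pessimistic(al) + optimistic(al) for al in alphas])
    print(f"E[xi] ≈ {expected:.3f}; closed form (a + 2b + c)/4 = {(a + 2*b + c)/4}")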
Example 11. A random fuzzy variable ξ is said to be exponential if, for each θ, ξ(θ) is an exponentially distributed random variable whose density function is defined as f(x) = β(θ) exp(−β(θ)x) for x ≥ 0, where β is a positive fuzzy variable defined on the space Θ. An exponentially distributed random fuzzy variable is denoted by ξ ∼ EXP(β), and the fuzziness of the random fuzzy variable ξ is said to be characterized by the fuzzy variable β. It follows from Proposition 10 that Pr{ξ(θ) ≥ t} and E[ξ(θ)] are fuzzy variables. We can arrive at Pr{ξ(θ) ≥ t} = exp(−β(θ)t) and E[ξ(θ)] = 1/β(θ).
Then the expected value E[ξ] is defined by E[ξ] = ∫₀^∞ Cr{θ ∈ Θ | E[ξ(θ)] ≥ r} dr − ∫_{−∞}^0 Cr{θ ∈ Θ | E[ξ(θ)] ≤ r} dr, provided that at least one of the two integrals is finite. In particular, if E[ξ(θ)] is a positive fuzzy variable, then E[ξ] = ∫₀^∞ Cr{θ ∈ Θ | E[ξ(θ)] ≥ r} dr.
Definition 13 (Y.-K. Liu and B. Liu [47]). Let ξ be a random fuzzy variable. Then the average chance, denoted by Ch, of the random fuzzy event characterized by {ξ ∈ B} is defined as Ch{ξ ∈ B} = ∫₀^1 Cr{θ ∈ Θ | Pr{ξ(θ) ∈ B} ≥ α} dα.
Remark 14. If ξ degenerates to a random variable, then the average chance degenerates to Pr{ξ ∈ B}, which is just the probability of the random event. If ξ degenerates to a fuzzy variable, then the average chance degenerates to Cr{ξ ∈ B}, which is just the credibility of the fuzzy event.
Finally, we refer to a definition on the stochastic ordering which is usually employed in the comparison of the lifetimes of systems.
Definition 15 (Ross [48]). A collection of random variables F is said to be a totally ordered set with stochastic ordering if and only if, for any given η_1, η_2 ∈ F, either Pr{η_1 ≥ t} ≤ Pr{η_2 ≥ t} for all t ∈ R, or Pr{η_1 ≥ t} ≥ Pr{η_2 ≥ t} for all t ∈ R. Remark 16 (Ross [48]). For any given η_1, η_2 ∈ F, we have
Random Fuzzy Unrepairable Systems
In this section, we first define the reliability and MTTF of random fuzzy unrepairable systems. Then the reliability and MTTF of random fuzzy series systems, parallel systems, series-parallel systems, parallel-series systems, and cold standby systems are discussed, respectively.
Definition 17. Let ξ be the random fuzzy lifetime of an unrepairable system, which is defined on the credibility space (Θ, P(Θ), Cr); then the reliability of the unrepairable system is defined by R(t) = Ch{ξ ≥ t}. Definition 18. Let ξ be the random fuzzy lifetime of an unrepairable system, which is defined on the credibility space (Θ, P(Θ), Cr); then the MTTF of the unrepairable system is defined by MTTF = E[ξ].
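As a numerical illustration of these definitions (not one of the paper's own examples), the Python sketch below evaluates the reliability and MTTF of a single component whose lifetime is exponential with a triangular fuzzy rate, using the average chance measure of Definition 13 and Proposition 7. All parameter values are illustrative.

    import numpy as np

    def cr_beta_le(r, a, b, c):
        """Cr{beta <= r} for a triangular fuzzy rate beta = (a, b, c)."""
        if r < a:
            return 0.0
        if r < b:
            return (r - a) / (2 * (b - a))
        if r < c:
            return 1 - (c - r) / (2 * (c - b))
        return 1.0

    def reliability(t, a, b, c, n=2000):
        """R(t) = Ch{xi >= t}: since Pr{xi(theta) >= t} = exp(-beta(theta) t) decreases
        in beta, the event {Pr >= alpha} is equivalent to {beta <= -ln(alpha)/t}."""
        alphas = (np.arange(n) + 0.5) / n
        return float(np.mean([cr_beta_le(-np.log(al) / t, a, b, c) for al in alphas]))

    def mttf(a, b, c, n=2000):
        """MTTF = E[xi]; E[xi(theta)] = 1/beta(theta) decreases in beta, so its
        pessimistic/optimistic values are the reciprocals of beta's optimistic/
        pessimistic values (Proposition 7)."""
        alphas = np.arange(1, n) / n
        beta_pess = np.where(alphas <= 0.5, a + 2 * alphas * (b - a), c - 2 * (1 - alphas) * (c - b))
        beta_opt = np.where(alphas <= 0.5, c - 2 * alphas * (c - b), a + 2 * (1 - alphas) * (b - a))
        return float(0.5 * np.mean(1.0 / beta_pess + 1.0 / beta_opt))

    # Illustrative fuzzy failure rate (per hour) and mission time of 100 hours
    print(reliability(100.0, 0.001, 0.002, 0.003), mttf(0.001, 0.002, 0.003))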
Remark 20. If ξ_i, i = 1, 2, . . . , n, degenerate to random variables, the result in Theorem 19 degenerates to a form that is consistent with the result in the stochastic case (see Barlow and Proschan [1]).
Remark 21. If ξ_i, i = 1, 2, . . . , n, degenerate to fuzzy variables, the result in Theorem 19 degenerates to a form that is consistent with the result in the fuzzy case (see Liu and Zhu [49]).
Proof. By Definition 18 and Proposition 7, we have It follows from (23), (29), and (32) that The theorem is proved.
in which R(t) is the reliability of the series system in the stochastic case.
Remark 24. If ξ_i, i = 1, 2, . . . , n, degenerate to fuzzy variables, the result in Theorem 22 degenerates to the form in which R(t) is the reliability of the series system in the fuzzy case.
By Theorems 19 and 22, we obtain the corresponding expression, in which E is the expected value operator of a fuzzy variable.
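To make the series-system discussion concrete, the following Python sketch evaluates the average-chance reliability of a series system numerically. It assumes exponential component lifetimes whose failure rates are triangular fuzzy variables (illustrative values) and relies on the standard credibility-theory fact that, for independent fuzzy variables, optimistic and pessimistic values of a monotone function are obtained componentwise. It is an illustration of the framework rather than a transcription of Theorem 22.

    import numpy as np

    def tri_pess_opt(alpha, a, b, c):
        """alpha-pessimistic and alpha-optimistic values of a triangular fuzzy variable."""
        pess = a + 2 * alpha * (b - a) if alpha <= 0.5 else c - 2 * (1 - alpha) * (c - b)
        opt = c - 2 * alpha * (c - b) if alpha <= 0.5 else a + 2 * (1 - alpha) * (b - a)
        return pess, opt

    def series_reliability(t, rates, n=2000):
        """Average-chance reliability of a series system of independent exponential
        components whose fuzzy failure rates are given as triangular triples."""
        total = 0.0
        for k in range(n):
            alpha = (k + 0.5) / n
            pess = [tri_pess_opt(alpha, *r)[0] for r in rates]
            opt = [tri_pess_opt(alpha, *r)[1] for r in rates]
            # the survival probability decreases in every rate, so its optimistic
            # value uses the pessimistic rates and vice versa
            total += 0.5 * (np.exp(-t * sum(pess)) + np.exp(-t * sum(opt)))
        return total / n

    rates = [(0.001, 0.002, 0.003), (0.0005, 0.001, 0.002)]   # illustrative fuzzy rates
    print(series_reliability(100.0, rates))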
Reliability Analysis of Random Fuzzy Unrepairable Parallel Systems
Consider a parallel system composed of n independent components. Let ξ_i be the lifetime of component i, which is a random fuzzy variable on the credibility space (Θ_i, P(Θ_i), Cr_i), i = 1, 2, . . . , n. Obviously, the lifetime of the parallel system is ξ = max{ξ_1, ξ_2, . . . , ξ_n}, which is a random fuzzy variable on the product credibility space (Θ, P(Θ), Cr), where Θ = Θ_1 × Θ_2 × · · · × Θ_n. Proof. By Definitions 13 and 17 and Proposition 7, we have Let It is easy to see that that is, On the other hand, by (19), we have By (46) and (47), we have By (39) and (48), we have which completes the proof.
Remark 27. If ξ_i, i = 1, 2, . . . , n, degenerate to random variables, the result in Theorem 26 degenerates to a form that is consistent with the result in the stochastic case (see Barlow and Proschan [1]).
Remark 28. If ξ_i, i = 1, 2, . . . , n, degenerate to fuzzy variables, the result in Theorem 26 degenerates to a form that is consistent with the result in the fuzzy case (see Liu and Zhu [49]).
Remark 30.If , = 1, 2, . . ., , degenerate to random variables, the result in Theorem 29 degenerates to the form in which () is the reliability of parallel system in stochastic case.
Remark 31.If , = 1, 2, . . ., , degenerate to fuzzy variables, the result in Theorem 29 degenerates to the form in which () is the reliability of parallel system in fuzzy case.
Reliability Analysis of Random Fuzzy Unrepairable
Series-Parallel Systems.Consider a series-parallel system which is a series system of subsystems; each subsystem is composed of parallel components.Let be the lifetime of component in th subsystem, which is a random fuzzy variable on the credibility space (Θ , P(Θ ), Cr ), = 1, 2, . . ., , = 1, 2, . . ., .We assume the components are mutually independent.It is easy to know that the lifetime of the seriesparallel system is = min 1≤≤ (max 1≤≤ ), which is a random fuzzy variable on the product credibility space ) . (62) Remark 34.If , = 1, 2, . . ., , = 1, 2, . . ., , degenerate to random variables, the result in Theorem 33 degenerates to the form which is consistent with the result in stochastic case (see Barlow and Proschan [1]).
in which () is the reliability of series-parallel system in fuzzy case.
Reliability Analysis of Random Fuzzy Unrepairable
Parallel-Series Systems.Consider a parallel-series system which is a parallel system of subsystems; each subsystem is composed of series components.Let be the lifetime of component in th subsystem, which is a random fuzzy variable on the credibility space (Θ , P(Θ ), Cr ), = 1, 2, . . ., , = 1, 2, . . ., .We assume the components are mutually independent.It is easy to know that the lifetime of the parallel-series system is = max 1≤≤ (min 1≤≤ ), which is a random fuzzy variable on the product credibility space (Θ, P(Θ), Cr), where Θ = Θ On the other hand, by (83) and Definition 15, we have Since , are arbitrary points in , = 1, 2, . . ., , we have It follows from ( 86) and (88) that The theorem is proved.
in which () is the reliability of cold standby system in fuzzy case.
Numerical Examples
In this section, we give the reliability analysis of a lighting lamp system and a hi-fi system with random fuzzy lifetimes.
Conclusion
In this paper, random fuzzy theory provides a mathematical foundation for reliability theory, which makes it possible to analyse more complex unrepairable systems exhibiting both fuzziness and randomness. Based on that, we establish five basic mathematical models of random fuzzy unrepairable systems, including series systems, parallel systems, series-parallel systems, parallel-series systems, and cold standby systems with absolutely reliable conversion switches. Furthermore, the expressions of reliability and MTTF are given for each of the above five random fuzzy unrepairable systems. When the random fuzzy lifetimes degenerate to random lifetimes or fuzzy lifetimes, the results we arrive at remain applicable. In future research, continued attention might be paid to random fuzzy systems, with reliability analysis and maintenance policies of repairable systems still to be addressed.
Specification and Optimal Reactive Synthesis of Run-time Enforcement Shields
A system with sporadic errors (SSE) is a controller which produces high quality output but it may occasionally violate a critical requirement REQ(I,O). A run-time enforcement shield is a controller which takes (I,O) (coming from SSE) as its input, and it produces a corrected output O' which guarantees the invariance of requirement REQ(I,O'). Moreover, the output sequence O' must deviate from O "as little as possible" to maintain the quality. In this paper, we give a method for logical specification of shields using formulas of logic Quantified Discrete Duration Calculus (QDDC). The specification consists of a correctness requirement REQ as well as a hard deviation constraint HDC which must both be mandatorily and invariantly satisfied by the shield. Moreover, we also use quantitative optimization to give a shield which minimizes the expected value of cumulative deviation in an H-optimal fashion. We show how tool DCSynth implementing soft requirement guided synthesis can be used for automatic synthesis of shields from a given specification. Next, we give logical formulas specifying several notions of shields including the k-Stabilizing shield of Bloem et al. as well as the Burst-error shield of Wu et al., and a new e,d-shield. Shields can be automatically synthesized for all these specifications using the tool DCSynth. We give experimental results showing the performance of our shield synthesis tool in relation to previous work. We also compare the performance of the shields synthesized under diverse hard deviation constraints in terms of their expected deviation and the worst case burst-deviation latency.
Introduction
A system with sporadic errors (SSE) is a controller which produces high quality desirable output for any given input but it may sporadically violate a critical system requirement REQ (I, O), where I and O are the set of input and output propositions. Many manually designed controllers have this character, as they embody designer's unspecified optimizations, however they may have obscure design errors. A runtime enforcement shield for a specified critical requirement REQ(I, O) is a controller (Mealy machine) which receives both input and output (I, O) generated by SSE. The shield produces a modified output O ′ which is guaranteed to invariantly meet the critical requirement REQ(I, O ′ ) (correct-by-construction). Moreover, in each run, the shield output O ′ must deviate from the SSE output O "as little as possible", to maintain the quality. This allows the shield to benefit from system designer's optimizations without having to formally specify these or to handle these in the synthesis. See Figure 2.
A central issue in designing run-time enforcement shields is the underlying notion of "deviating as little as possible" from the SSE output. There are several different notions explored in the literature [2,9,21,20]. In their pioneering paper, Bloem et al. [2] proposed the notion of k-stabilizing shield which may deviate for at most k cycles continuously under suitable assumptions. If assumptions are not met the shield may deviate arbitrarily. This was proposed as a hard requirement which must be mandatorily satisfied by the shield in any behaviour. We call such constraints as hard deviation constraints.
Konighofer et al [9] have proposed some variants of the k-stabilizing shield requirement with and without fail safe state, which are also hard deviation constraints. Specific shield synthesis algorithms have been developed for each of these constraints.
As our first main contribution, we propose a logical specification notation for hard deviation constraints using the formulas of an interval temporal logic QDDC. This logic allows us to succinctly and modularly specify regular properties [13,11,12]. With its counting constructs and interval based modalities, it can be used to conveniently specify both the correctness requirement REQ(I, O) as well as the hard deviation constraint HDC.
Criticizing the inability of k-stabilizing shields in handling burst errors, Wu et al. [21,20] proposed a burst-error shield which enforces the invariance of the correctness requirement, and it locally minimizes the measure of deviation between SSE output O and the shield output O ′ , at each step. An algorithm for the synthesis of such shields was given. We call such a shield as locally deviation minimizing.
In this paper, as our second main contribution, we generalize the Wu technique to minimize the cumulative deviation more globally. An H-optimal shield which minimizes at each point the expected value of cumulative deviation in next H-steps of shield execution is computed. The cumulative deviation is averaged over all possible H length inputs to arrive at the optimal estimate. A well known value iteration algorithm [1,16] for optimal policy synthesis of Markov Decision Processes allows us to compute such a shield. We call such a shield as H-optimally deviation minimizing. This is a powerful optimization and in the paper we experimentally show its significant impact on performance of the shield. It may be noted that Wu's burst-error shield is obtained by selecting H = 0.
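To make this optimization concrete, the sketch below shows a generic finite-horizon value iteration of the kind referred to above; it is illustrative only and is not DCSynth's actual implementation (which is BDD-based and works on supervisors rather than explicit tables). The data structures `states`, `inputs`, `moves` and `deviation`, and the receding-horizon use of the resulting output choices, are our own assumptions.

```python
# A minimal sketch of finite-horizon value iteration used to prune a
# shield-supervisor so that it minimizes the expected cumulative deviation over
# the next H steps, assuming uniformly random inputs.

def h_optimal_pruning(states, inputs, moves, deviation, H):
    """
    moves[s][i]       : list of (output, next_state) pairs allowed by the supervisor
    deviation[s][i][o]: 1 if output o deviates from the SSE output under input i, else 0
    Returns value[s] (expected H-step deviation) and the pruned choice of outputs.
    Assumes the supervisor is non-blocking, i.e. moves[s][i] is never empty.
    """
    value = {s: 0.0 for s in states}                    # V_0 = 0
    best = {}
    for _ in range(H):
        new_value = {}
        for s in states:
            total = 0.0
            for i in inputs:                            # average over uniform inputs
                costs = {o: deviation[s][i][o] + value[ns] for o, ns in moves[s][i]}
                m = min(costs.values())
                best[(s, i)] = [o for o, c in costs.items() if c == m]
                total += m
            new_value[s] = total / len(inputs)
        value = new_value
    return value, best                                  # 'best' defines the sub-supervisor
```

Used in a receding-horizon fashion, the shield at each step picks any output in `best[(current_state, current_input)]`, which corresponds to acting H-optimally at every point of the execution.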
Finally, we propose a uniform method for synthesizing a run-time enforcement shield from given logical specification (REQ, HDC) and a horizon value (natural number) H. The resulting shield invariantly meets the correctness requirement REQ as well as the hard deviation constraint HDC. Moreover, the shield is H-optimally deviation minimizing. The shield synthesis is carried out by using the soft requirement guided controller synthesis tool DCSynth [18]. This tool allows synthesis of H-optimal controllers from specified hard and soft QDDC requirements.
Using the proposed formalism, in the paper, we formulate several diverse notions of shields. These include a logical specification of Bloem's k-stabilizing shield and Wu's burst-error shield, as well as a new notion of e, d-shield. A uniform synthesis method using the tool DCSynth can be applied to obtain the corresponding run-time enforcement shields. It is notable that tool DCSynth uses an efficient BDD-based semi-symbolic representation of automata/controllers with aggressive minimization. This allows the tool to scale better and to produce smaller sized shields. In the paper, we give an experimental evaluation of the performance of our DCSynth tool and compare it with some previously reported studies in the literature.
With the ability to formulate shields with diverse hard deviation constraints, it is natural to ask for a comparison of the performance of these shields. The performance must essentially measure the extent of deviation of the shield output from the SSE output. Towards this, we propose two measures of the shield performance.
• We compute the probability of deviation in long run. For this, we assume that the input to the shield is fully random, with each input variable value chosen independently of the past and each other. While simplistic, this does provide some indication of the shield's effectiveness in average.
• We measure the worst case burst-deviation latency. This gives the maximum number of consecutive deviations possible in the worst case. (If unbounded, we report ∞). A model checking technique implemented in a tool CTLDC [14] allows us to compute this worst case latency.
Tool DCSynth provides facilities for the computation of each of these performance measures for a synthesized shield. The reader may refer to the original papers on DCSynth [18,15] for details of techniques by which such performance can be measured. In this paper, we synthesize shields with different hard deviation constraints and we provide a comparison of the performance of these shields. This allows us to draw some preliminary conclusions. Clearly, much wider experimentation is needed for firmer insight. The rest of the paper is organized as follows. Section 2.1 describes the syntax and semantics of the logic QDDC. Section 2.3 gives the syntax of DCSynth specification and brief outline of the synthesis method. Section 3 describes the various logical notions of shield specification. Section 4 describes metrics to evaluate the shield performance and corresponding experimental results. In Section 5, we conclude the paper with discussion and related work.
Preliminaries
We provide a brief overview of logic QDDC as well as the soft requirement guided H-optimal controller synthesis method implemented in tool DCSynth. This method and tool is applied to the problem of run-time enforcement shield synthesis in this paper. The reader may refer to the original paper [18] for further details of these preliminaries.
Quantified Discrete Duration Calculus (QDDC) Logic
Let PV be a finite non-empty set of propositional variables. Let σ be a non-empty finite word over the alphabet 2^PV. It has the form σ = P_0 · · · P_n where P_i ⊆ PV for each i ∈ {0, . . . , n}. Let len(σ) = n + 1. The syntax of a propositional formula over variables PV is built from the variables in PV, with &&, ||, ! denoting conjunction, disjunction and negation, respectively. Operators such as ⇒ and ⇔ are defined as usual.
Let Ω(PV) be the set of all propositional formulas over variables PV. Let i ∈ dom(σ). Then the satisfaction of a propositional formula ϕ at point i, denoted σ, i |= ϕ, is defined as usual and omitted here for brevity. The syntax of a QDDC formula over variables PV extends propositional formulas with interval modalities and counting constructs. An interval over a word σ is of the form [b, e] where b, e ∈ dom(σ) and b ≤ e. Let Intv(σ) be the set of all intervals over σ. Let σ be a word over 2^PV, and let [b, e] ∈ Intv(σ) be an interval. Then the satisfaction of a QDDC formula D, written σ, [b, e] |= D, is defined inductively, with the Boolean combinations !D, D_1 || D_2 and D_1 && D_2 defined in the expected way. We call a word σ′ a p-variant, p ∈ PV, of a word σ if σ′ agrees with σ on every variable q ≠ p at every position i ∈ dom(σ). Example 1. We give an example QDDC formula over propositions {p, q, r} which specifies a typical recurrent reach-avoid behaviour required in many control systems. Intuitively, the formula ϕ until (n) holds at a position i in the behaviour if, since the previous occurrence of r, the proposition p persists till an occurrence of q. Moreover, q must occur within n time units from the last occurrence of r. For example, here r may denote entering of enemy air-space, p may denote that the UAV is invisible and q may denote that the target is reached. Let ϕ_3 abbreviate ϕ until (3). Figure 1 gives a possible behaviour σ where the last row gives the value of σ, i |= ϕ_3 for each position i.
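As a small illustration of the intended meaning of ϕ until (n) (and not of the formal interval-based QDDC semantics), the sketch below evaluates one plausible reading of the informal description at every position of a finite word; the encoding of letters as Python sets of proposition names is an assumption made here.

```python
# A small sketch evaluating one plausible reading of the informal description of
# phi_until(n) from Example 1 over a finite word. Each letter is the set of
# propositions true at that position. Illustrative only.

def phi_until(word, n):
    result, last_r = [], None
    for i, letter in enumerate(word):
        if 'r' in letter:
            last_r = i
        if last_r is None:
            result.append(True)                 # no r observed yet: nothing to enforce
            continue
        ok, p_persists = False, True
        for k in range(last_r, i + 1):
            if 'q' in word[k] and k - last_r <= n:
                ok = True                       # q reached in time, with p held before it
                break
            if 'p' not in word[k]:
                p_persists = False              # p broken before q arrived
                break
        if not ok:                              # q not (yet) reached: p must persist, within budget
            ok = p_persists and (i - last_r <= n)
        result.append(ok)
    return result

# r at position 0, p persists, q arrives at position 3:
print(phi_until([{'r', 'p'}, {'p'}, {'p'}, {'q'}, set()], 3))   # all True
print(phi_until([{'r', 'p'}, {'p'}, {'p'}, {'q'}, set()], 2))   # fails from position 3 onward
```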
Theorem 2. [13] For every formula D over variables PV we can construct a Deterministic Finite Automaton (DFA) A (D) over alphabet 2 PV such that L(A (D)) = L(D). We call A (D) a formula automaton for D or the monitor automaton for D.
A tool DCVALID implements this formula automaton construction in an efficient manner by internally using the tool MONA [8]. It gives minimal, deterministic automaton (DFA) for the formula D. We omit the details here. However, the reader may refer to several papers on QDDC for detailed description and examples of QDDC specifications as well as its model checking tool DCVALID [13,11,12].
In the rest of the paper we consider QDDC formulas and automata where variables PV = I ∪ O are partitioned into disjoint sets of input variables I and output variables O. Such a formula/automaton specifies a relation between inputs and outputs.
For technical convenience, we define a notion of indicator variable for a QDDC formula (regular property). The idea is that the indicator variable w witnesses the truth of a formula D at any point in execution. Thus,
This composition gives a formula over input-output variables (I, O ∪W ).
Cascade composition provides a useful ability to modularize a formula using auxiliary propositions W which witness other regular properties given as QDDC formulas.
Supervisors and Controllers
Now we consider QDDC formulas and automata where variables PV = I ∪ O are partitioned into disjoint sets of input variables I and output variables O. We show how Mealy machines can be represented as a special form of Deterministic Finite Automata (DFA). Supervisors and controllers are Mealy machines with special properties. This representation allows us to use the MONA DFA library [8] to efficiently compute supervisors and controllers in our tool DCSynth. The intuition is that the transitions from an accepting state q ∈ F to the reject state r are forbidden (and kept only for making the DFA total). The language of any such Mealy machine is prefix-closed. Recall that for a Mealy machine,
Definition 5 (Output-nondeterministic Mealy Machines). A total and Deterministic Finite Automaton (DFA) over input-output alphabet
It follows that for all input sequences a non-blocking Mealy machine can produce one or more output sequence without ever getting into the reject state.
For a Mealy machine M over variables Here σ , ii, oo must have the same length. We will not distinguish between σ and (ii, oo) in the rest of the paper. Also, for any input sequence ii ∈ (2 I ) * , we will define Definition 6 (Controllers and Supervisors). An output-nondeterministic Mealy machine which is nonblocking is called a supervisor. A deterministic supervisor is called a controller.
The non-deterministic choice of outputs in a supervisor denotes unresolved decision. The determinism ordering below allows supervisors to be refined into controllers.
Note that, being supervisors, they are both non-blocking, and hence the set of admissible outputs is non-empty (≠ ∅) for any ii ∈ (2^I)*. The supervisor Sup_2 may make use of additional memory for resolving and pruning the non-determinism in Sup_1.
DCSynth Specification and Controller Synthesis
This section gives a brief overview of the soft requirement guided controller synthesis method from QDDC formulas. The method is implemented in a tool DCSynth (see [18] for details). This method and the tool will be used for the synthesis of run-time enforcement shields in the subsequent sections. A well-known greatest fixed point algorithm for safety synthesis over A(D) gives us MPS(D) if it is realizable. We omit the details here (see [18]). Proposition 9 (MPS Monotonicity). Given QDDC formulas D_1 and D_2 over variables (I, O) such that |= (D_1 ⇒ D_2), every behaviour allowed by MPS(D_1) is also allowed by MPS(D_2). The controller synthesis goes through the following three stages: computation of the maximally permissive supervisor (MPS) for the hard requirement, H-optimal pruning guided by the soft requirements to obtain the sub-supervisor MPHOS, and determinization of the result into a controller.
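For concreteness, a minimal sketch of such a greatest fixed point computation on an explicitly represented safety automaton is given below; the names `states`, `inputs`, `outputs`, `delta` and `safe` are hypothetical placeholders, and DCSynth itself operates on a BDD-based semi-symbolic representation rather than explicit sets.

```python
# A generic sketch of the greatest fixed point behind a maximally permissive
# supervisor for a safety automaton A(D). Illustrative only.

def winning_region(states, inputs, outputs, delta, safe):
    """Largest set W of safe states from which, for every input, some output
    keeps the run inside W forever. delta(s, i, o) returns the successor state."""
    W = {s for s in states if safe(s)}
    while True:
        W_next = {s for s in W
                  if all(any(delta(s, i, o) in W for o in outputs) for i in inputs)}
        if W_next == W:
            return W                       # greatest fixed point reached
        W = W_next

def mps_moves(W, inputs, outputs, delta):
    """Maximally permissive supervisor: from each winning state keep every
    output that stays inside the winning region."""
    return {(s, i): [o for o in outputs if delta(s, i, o) in W]
            for s in W for i in inputs}
```

If the initial state lies outside the returned winning region, the hard requirement is unrealizable; otherwise `mps_moves` retains all controllable choices, which is exactly what maximal permissiveness demands.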
Specification and Synthesis of Run-time Enforcement Shields
Given a correctness requirement REQ(I, O) as a QDDC formula over input-output propositions (I, O), a system with sporadic errors (SSE) may fail to meet the requirement at some of the points in a behaviour (ii, oo). (The reader may recall Definition 5 and its following two paragraphs for the notation.) A runtime enforcement shield is a Mealy machine with input variables I ∪ O and output variable O ′ . See Figure 2. For any input (ii, oo) the shield produces a modified output oo ′ such that (ii, oo ′ ) invariantly satisfies the correctness requirement REQ(I, O ′ ). Moreover, the output oo ′ must deviate from the SSE output oo as little as possible to maintain quality. There are several distinct notions of "deviating as little as possible" leading to different shields. In this section, we give a logical framework for specifying various shields by using the logic QDDC. We then provide an automatic synthesis of a run-time enforcement shield from its logical specification using the tool DCSynth of the previous section. Thus, we achieve a logical specification and a uniform synthesis method for shields.
Deviation constraints specify the extent of allowed deviation in a shield's behaviour. Our specification has hard deviation constraint HDC which must be mandatorily and invariantly satisfied by the shield. (This is similar to the hard requirement in DCSynth.) We also define a canonical soft deviation constraint Hamming(O, O ′ ) which will be useful in minimizing cumulative deviation during synthesis. Overall, a shield specification consists of a pair (REQ, HDC).
INDDEF introduces two indicator (witness) variables: Ind(REQ(I, O), SSEOK), which makes SSEOK witness that the SSE output satisfies the correctness requirement, and Ind(·, Deviation), which makes Deviation witness that the shield output O′ differs from the SSE output O. A hard deviation constraint HDC is a QDDC formula over propositions SSEOK and Deviation. It specifies a constraint on Deviation conditional upon the behaviour of SSEOK. In Subsection 3.4, we will give a list of several different hard deviation constraints.
For shield synthesis using DCSynth, we define the QDDC formula HShield given in Equation (1) as the hard requirement over the input-output propositions (I ∪ O, O′). Notice that in its formulation we use the cascade composition from Definition 3. This allows us to modularize the specification into the components REQ and HDC.
The constraint (QDDC formula) HShield must be invariantly satisfied by the shield. Tool DCSynth gives us a maximally permissive supervisor MPS(HShield) with this property (See definition 8). This supervisor can be termed as shield-supervisor without deviation minimization and it will be denoted by MPS(REQ, HDC).
Soft Deviation Constraint
While HDC already places some constraints on the permitted deviation, we can further optimize the deviation in the supervisor MPS(REQ, HDC) of the previous section. Quantitative optimization techniques from Markov Decision Processes can be used. (Stochasticity comes from the distribution of inputs to the shield.) The tool DCSynth allows us to specify such optimization using a list of soft requirement formulas with weights. The tool optimizes a supervisor to a sub-supervisor which maximizes the expected value of the cumulative weight of soft requirements over the next H steps. This cumulative weight is averaged over all input sequences of length H. See Section 2.3 and [18] for further details.
We make use of this H-optimal sub-supervisor computation to get a sub-supervisor which minimizes the expected cumulative deviation over next H-steps.
Determinization
The reader must note that both the shield-supervisors MPS(REQ, HDC) and MPHOS(REQ, HDC, H) are output non-deterministic. Multiple choice of outputs may satisfy the hard deviation constraints while being H-optimal for the soft deviation constraint. Any arbitrary resolution of the output non-determinism will preserve the invariance guarantees and H-optimality (see [18]).
In our method, we allow the user to specify a preference ordering ord on the shield outputs 2 O ′ . A lexicographically ordered list of output literals is given as explained in Example 10. A deterministic controller is obtained by retaining only the highest ordered output from the non-deterministic choice of outputs offered by the supervisor. Thus, given a preference ordering ord we can obtain shields (deterministic controllers) Det ord (MPS(REQ, HDC)) and Det ord (MPHOS (REQ, HDC, H)).
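A minimal sketch of this determinization step is shown below, assuming outputs are represented as Boolean assignments and the ordering is a list of output literals such as (!q′ > !p′); the representation and function names are illustrative, not DCSynth's actual interface.

```python
# A sketch of resolving the output non-determinism of a shield-supervisor using a
# lexicographic preference ordering over output literals. Illustrative only.

def satisfies(assignment, lit):
    if lit.startswith('!'):
        return not assignment[lit[1:]]
    return assignment[lit]

def pick_output(allowed, ord_literals):
    # Prefer assignments satisfying earlier literals in the ordering.
    key = lambda a: tuple(0 if satisfies(a, lit) else 1 for lit in ord_literals)
    return min(allowed, key=key)

# Example: among the outputs permitted by the supervisor, with ordering (!q' > !p'):
allowed = [{"p'": True, "q'": True}, {"p'": True, "q'": False}, {"p'": False, "q'": False}]
print(pick_output(allowed, ["!q'", "!p'"]))   # -> {"p'": False, "q'": False}
```

Because only one of the supervisor's allowed outputs is retained at each step, the resulting controller inherits both the invariance guarantees and the H-optimality of the supervisor it refines.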
In summary, given a correctness requirement REQ(I, O) to be enforced by the shield, a hard deviation constraint HDC(SSEOK, Deviation), a horizon value H (for globally minimizing the deviation over next H steps) and a preference ordering ord on shield outputs 2 O ′ , we can synthesize shields Det ord (MPS(REQ, HDC)) and Det ord (MPHOS (REQ, HDC, H)). When ord, REQ, HDC, H are clear from context, these shields are referred to as Shield NoDM (shield with no deviation minimization) and Shield DM (shield with deviation minimization), respectively.
Variety of Hard Deviation Constraints and Shield-Types
Table 1 lists the hard deviation constraints defining the shield types considered in this paper. We provide some explanation and comments on these specifications. • The Burst shield (V 0 ) places no hard deviation constraint (its HDC is the trivial requirement true) and corresponds to the Burst-error shield of Wu et al. [21], which locally optimizes deviation at each step without any look-ahead into the future. Larger horizon values give superior shields which improve the probability of non-deviation in the long run, as shown by our experiments reported later in this paper.
• A k-shield (V 1 ) specifies (as its hard deviation constraint) that for any observation interval the deviation can invariantly happen for at most k cycles. Thus, a burst of deviation has length of at most k cycles. The k-shield (V 1 ) specifies that this property must hold unconditionally. Such a specification is often unrealizable. For example, if SSE makes consecutive errors for more than k cycles, the shield may be forced to deviate for all of these cycles. Hence, several variants of the V 1 shield have been considered.
• The k-stabilizing shield (V 2 ) specifies that the shield may deviate as long as SSE makes errors (even burst errors). Once SSE recovers from deviation (indicated by SSEOK becoming and remaining true), the shield may deviate for at most k cycles. Thus, the shield must recover from deviation within k cycles once SSEOK is established and maintained. Also, there must be no spurious deviation due to conjunct NoSpuriousDeviation. This specification precisely gives the k-stabilizing shield without fail-safe state, originally defined by Konighofer et al. [9]. By a variation of this, the k-stabilizing shield with fail-safe state [9] can also be specified but we omit this here.
• We define a new notion of shield called e, d-shield (V 3 ). This states that in any observation interval if the count of errors by SSE (given by the term (scount !SSEOK)) is at most e then the count of number of cycles with deviations (given by the term (scount Deviation)) is at most d. Thus e errors lead to at most d deviations. Also, there is no spurious deviation due to the conjunct NoSpuriousDeviation.
It may be noted that irrespective of the shield type the synthesized shield have to meet the requirement REQ(I, O ′ ) invariantly as specified by the formula HShield (See Equation 1).
Performance Measurement Metrics and Experiments
In this section we give the experimental results for shield synthesis carried out in our framework. We first benchmark the performance of our tool and compare it with some other tools for shield synthesis in Section 4.1. In Section 4.2 we define some performance measurement metrics for shields and we use these to compare various shield types.
Performance of Tool DCSynth in Shield Synthesis
We have synthesized Burst-shield V 0 with deviation minimization using DCSynth for all the benchmark examples given in [21]. The results are tabulated in Table 2. All our experiments were conducted on a Linux (Ubuntu 18.04) system with an Intel i5 64-bit, 2.5 GHz processor and 4 GB memory. The formula automata files of Wu et al. [19] were used in place of QDDC formulas for uniformity. For a comparison with other tools, the results for the k-stabilizing shield synthesis and the Burst-error shield synthesis for the same examples are reproduced directly from Wu et al. [21]. As these are for an unknown hardware setup, a direct comparison of the synthesis times with the DCSynth synthesis times is only indicative.
As the table suggests, in most of the cases, the shield synthesized by DCSynth compares favorably with the results reported in literature [21], both in terms of the size of the shield and the time taken for the synthesis. Recall that DCSynth uses aggressive minimization to obtain smaller shields. As an example, for the specification AMBA G5+6+9e64+10, our tool synthesizes a shield significantly faster and with smaller number of states than the existing tools [2,21].
Comparison between various shield notions
For comparing the performance of shields synthesized with different shield types, we define the following performance metrics. Expected Value of a formula: given a shield S and a QDDC formula D, the metric E_unif(S, D) gives the probability (expected value) of D holding at a point in the long run behaviour of S under uniformly random, independent inputs; it is computed as the steady state probability of a suitable Discrete Time Markov Chain (DTMC). The construction of the desired DTMC is as follows. The product S × A(D) gives a finite state automaton with the same behaviours as S. Moreover, it is in an accepting state exactly when D holds for the past behaviour. (Here A(D) works as a total deterministic monitor automaton for D without restricting S.) By assigning uniform discrete probabilities to all the inputs from any state, we obtain the DTMC M_unif(S, D) along with a designated set of accepting states. The DTMC is in an accepting state precisely when D holds. Standard techniques from Markov chain analysis allow us to compute the probability (expected value) of being in the set of accepting states on long runs (steady state) of the DTMC. This gives us the desired value E_unif(S, D). A leading probabilistic model checking tool, MRMC, implements this computation [7]. In DCSynth, we provide a facility to compute M_unif(S, D) in a format accepted by the tool MRMC. Hence, using DCSynth and MRMC, we are able to compute E_unif(S, D).
The expected value of a shield S being in a non-deviating state over long runs can be computed as E uni f (S,true^<!Deviation>).
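The last step of this computation can be illustrated with a small sketch, assuming the uniform-input DTMC has already been built as an explicit row-stochastic matrix with a set of accepting (here: non-deviating) states; in practice DCSynth exports the chain to MRMC rather than computing the steady state itself.

```python
# A sketch of estimating the long-run probability of being in accepting states of
# the uniform-input DTMC obtained from the product of the shield with the monitor
# automaton. Power iteration; assumes the chain is aperiodic. Illustrative only.

def steady_state_mass(P, accepting, iters=10_000):
    """P: row-stochastic transition matrix (list of lists);
    accepting: set of state indices in which the monitored formula holds."""
    n = len(P)
    dist = [1.0 / n] * n                     # arbitrary initial distribution
    for _ in range(iters):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return sum(dist[j] for j in accepting)

# Toy 2-state chain: from either state, the next state is non-deviating with prob. 0.85
P = [[0.15, 0.85],
     [0.15, 0.85]]
print(steady_state_mass(P, accepting={1}))   # ~0.85
```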
Worst Case Burst-Deviation Latency: The worst case burst-deviation latency gives the maximum number of consecutive cycles for which the shield deviates even when the SSE is satisfying the requirement. Thus, it denotes the maximum length of an interval in the behaviour of the shield for which the formula "SSEOK && Deviation" holds invariantly.
Given a shield S and a QDDC formula D, the latency goal MAX LEN(D, S) computes the length of the longest interval, across all behaviours of S, throughout which D holds invariantly (reported as ∞ if this length is unbounded).
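A simple way to realise such a latency computation, assuming the shield and the monitor for D have been composed into an explicit graph, is a longest-run search that reports ∞ when a reachable cycle stays entirely inside the monitored property; the sketch below is illustrative and is not the algorithm used by CTLDC.

```python
# A sketch of the worst-case burst-deviation latency: the longest number of
# consecutive transitions along which a given property (e.g. SSEOK && Deviation)
# can hold, or infinity if some reachable cycle stays inside the property.
# The graph encoding is a hypothetical stand-in for the synthesized automaton.

import math

def max_run(nodes, edges, holds):
    """edges: dict node -> list of successor nodes (reachable part only);
    holds(u, v): True if the property holds on the step u -> v."""
    bad = {u: [v for v in edges[u] if holds(u, v)] for u in nodes}   # property subgraph
    memo, on_stack = {}, set()

    def longest_from(u):
        if u in on_stack:
            return math.inf                  # cycle entirely inside the property
        if u in memo:
            return memo[u]
        on_stack.add(u)
        best = 0
        for v in bad[u]:
            best = max(best, 1 + longest_from(v))
        on_stack.discard(u)
        memo[u] = best
        return best

    return max((longest_from(u) for u in nodes), default=0)
```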
Experiments and Findings
We can use the expected value of deviation and the worst case burst-deviation latency, defined above, for comparing the shields obtained using the various shield types defined in Section 3.4. We synthesized various shields for the correctness requirement ϕ until (n) given in Example 1 with n = 5 and the input-output propositions ({r}, {p, q}). The output propositions of the synthesized shield are {p′, q′}. For each shield type V i given in Table 1, the deterministic shields V i NoDM and V i DM were synthesized as outlined in the last paragraph of Section 3.3. Here V i NoDM denotes the shield synthesized without deviation minimization whereas V i DM denotes the shield obtained with the deviation minimization optimization. The shield-supervisors were determinized with the preference ordering (!q′ > !p′) on outputs. Table 3 gives the results obtained. We report the number of states of the shield along with the time taken (in seconds) by the tool DCSynth to compute the shield. Moreover, for comparing the performance of the resulting shields, their Expected Value of non-deviation as well as the worst case burst-deviation latency are reported in the table under the columns titled Expected Value and Latency, respectively.
It is observed that with deviation minimization optimization, several different shield types resulted in identical shields, although the time to synthesize them differed. For example, shields in rows numbered 10 to 15 are identical. We indicate such a situation by merging the corresponding rows to a single cell. We give our findings below.
• The k-shield (V 1 ) is unrealizable as expected. See its description in Section 3.4 for an explanation.
All the other shield types are found to be realizable.
• For shield synthesis without deviation minimization, we obtain distinct shields with distinct performance for each shield type. The Burst shield (V 0 ) has the poorest performance (expected non-deviation 0.25 and latency ∞) as it enforces the trivial hard deviation requirement true. The best expected value achieved without deviation minimization is 0.74.
• The performance of the shield considerably improves with the deviation minimization (DM) optimization. The expected value of 0.85 compares well against the best value of 0.74 without deviation minimization. Also, the burst-deviation latency drops to 0 with DM. We also notice that the performance improves with an increase in the horizon value when using DM. This is intuitively clear as the tool performs global optimization across a larger number of steps of look-ahead with increased horizon.
• For shield synthesis with deviation minimization optimization, all the different shield types V 0 ,V 2 ,V 3 resulted in identical shield for a given value of horizon H. Thus shields in rows 10-15 (synthesized with H = 0) and rows 16-21 (synthesized with H = 10) are found to be identical. This shows that deviation minimization effectively supersedes the different hard deviation guarantees provided by the HDC. While this is not theoretically guaranteed, our experience with robust controller synthesis also indicates the overwhelming effectiveness of the DM-like optimization [15].
Discussion and Related Work
In this paper we have presented a logical framework for specifying error-correcting run-time enforcement shields using formulas of logic QDDC. The specification contains a correctness requirement REQ, specifying the desired input-output relation to be maintained, as well as a hard deviation constraint HDC which specifies a constraint on deviation between the system output and the shield output. Our shield synthesis gives a shield which invariantly satisfies both REQ and HDC. Moreover, a powerful optimization globally minimizes the cumulative deviation between the system and the shield output. The idea of error-correcting run-time enforcement shield was proposed in the pioneering work of Bloem et al. [2], where the notion of k-stabilizing shield (with a synthesis algorithm) was proposed. This was further enhanced by Konighofer et al. [9]. Extension of shield synthesis to liveness properties has also been explored in this paper. Wu et al. [21,20] defined the burst shield which is capable of handling burst errors. Moreover, they proposed optimizing the shield with the choice of output which locally minimizes the deviation at each stage. In this paper, we have enhanced this with global optimization of cumulative deviation across next H steps.
In our method, the shield is logically specified using QDDC formulas and a uniform method for the synthesis of the shield is proposed. A tool DCSynth implements the synthesis method. Logic QDDC [13,12,11] with its interval logic modalities, threshold counting constraints, regular expression like constructs and second-order quantification over temporal variables provides a very rich vocabulary to specify both the system requirements and the deviation constraints. Logic QDDC is a discrete time version of Duration Calculus proposed by Zhou, Hoare and Ravn [5,4] with known automata theoretic decision and model checking procedures [13,3,17,10]. Using the proposed technique, we have specified the k-stabilizing shield of Konighofer et al. [9], the burst shield of Wu et al. [21,20], as well as a new e, dshield. Moreover, we have measured the performance of the shields resulting from these different criteria in terms of the expected value of deviation in long runs, as well as the worst case burst deviation latency. Our experiments show an overwhelming impact of global deviation minimization on the quality of the shield. At the same time, hard deviation constraints provide a conditional hard guarantee on the worst case deviation. Hence, the combination of hard deviation constraint together with global minimization of deviation is useful.
Konighofer et al. [9] as well as Ehlers and Topcu [6] propose controller/shield synthesis techniques for the optimal achievable value of the parameter k in a regular specification. By contrast, our current method requires k to be specified. In our future work, we will address similar optimal parametric synthesis from parameterized QDDC specifications.
The Space – Time is Flat at an Absolute Free Space. It is the Mass that Makes Space – Time Curved in. The Physical Time is Discrete or Continuous is An Observer Dependent Realism
According to Einstein, the astronomical bodies try to move in a straight line – it is the curved space – time that makes their paths curved in. This paper proposes that the space – time is originally a flat space – time (at an absolute free space), it is the presence of mass that makes space – time curved in. Whether the physical time is discrete or continuous, is an observer dependent realism only. An observer like human being uses neither too small units of time nor too big units of time. An observer like human being uses average or moderate units of time which makes time continuous and flat. The physical time is discrete and flat for too small units of time. The physical time is continuous and curved in for too big units of time. The space – time can be curved in into a point for infinite mass concentrated into a point. Theoretically, it should be the center of our universe.
I. Introduction
Theory 1: Without any significant mass or energy, any free space is an absolute free space. A free space is called an absolute free space if it is devoid of any significant and effective mass or energy.
Theory 2: The space – time is absolutely flat at an absolute free space.
Theory 3: The presence of significant mass or energy makes space – time curved in.
Theory 4: The physical time is discrete and flat for too low units of time, whereas the physical time is continuous and curved in for too big units of time. An observer with average or moderate units of time sees everything continuous and flat (human beings see everything continuous and flat with their eyes).
Theory 5: The space – time can be curved in into a point for infinite mass concentrated into a point. Theoretically (without any experimental proof), it is the center of our universe.
Theory 6: The physical time is discrete or continuous is an observer dependent realism only. Some observers can prove that time is discrete and flat, some other observers can also prove that time is continuous and curved in. Apart from them, some other class of observers can prove that time is curved in into a point with infinite derivatives.
Theory 7: The total space times the total time around a boundary is a constant; it is called the boundary condition of space – time of the Universe.
The space – time is flat at an absolute free space, a free space devoid of any significant mass or energy. An example can be the space between two adjacent galaxies. That is why all astronomical bodies try to travel in a straight line: originally the space – time is flat. It is the presence of mass that makes space – time curved in. The Earth would follow a flat space – time if it were in an absolute free space; because of the presence of the Sun, the Earth follows a curved in path around the Sun. Due to the presence of significant mass or energy, the space – time looks curved in. A huge energy trapped into a small space is called mass. Let us take the mass as m, the energy as E and the speed of light as C. Then, according to Einstein's mass – energy relation, E = mC². It proves that mass and energy are different phenomena of the same thing [II]. There is no known easy way to release energy from a given mass except Hawking's Radiation, which releases a huge amount of energy from the huge mass consumed by super massive Black Holes at the center of every galaxy [III]. Without mass and energy, the Universe would be a three dimensional space – time, and thus would be a flat space – time with a two dimensional space surface and one dimensional time. The presence of mass and energy makes the Universe a four dimensional space – time with three dimensional space and one dimensional time. Thus, mass and energy have created an extra spatial dimension in the Universe to make it a four dimensional space – time. This extra third spatial dimension turns the original flat space – time into a curved in space – time.
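As a worked instance of the mass – energy relation just quoted, taking C ≈ 3 × 10⁸ m/s, one kilogram of matter corresponds to an enormous amount of energy:

```latex
E \;=\; mC^{2} \;=\; (1\,\mathrm{kg})\times\left(3\times10^{8}\,\mathrm{m/s}\right)^{2}
  \;=\; 9\times10^{16}\,\mathrm{J},
```

which is roughly the energy of twenty megatons of TNT and illustrates why mass is described here as a huge energy trapped into a small space.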
II. The Universe Obeys Repetitive Nature
The Universe obeys repetitive nature [IV]. We start with atoms made up of protons, neutrons and electrons [V]. All atoms are made up in the same way in all elements and their compounds [VI]. The basic building block is the same and it is repetitive every time. It has a center with concentrated mass and electrons are revolving around the mass called nucleus [VII]. The solar system is the repetitive nature of an atom with concentrated mass at the center (Sun) and planets are revolving around the center [VIII]. There are billions of solar system in our galaxy, thus our solar system is repetitive in nature to produce billions of solar systems. Each galaxy has a center with super massive Black Hole which has the concentrated mass of the galaxy in the center and billions of solar systems are revolving around it [IX]. Each galaxy is repetitive in nature to have billions of galaxies in our Universe. Thus, the Universe must have a center with infinite concentrated mass that makes spacetime curved in into a point with infinite derivative of space -time. And billions of galaxies are revolving around the center to make our Universe. So, there is a center in the Universe, at least theoretically (without any experimental proof). Anything curved in will be ended up as circles or ellipses (distorted circles). So there are two possible space -time in the Universe; one is a circle and the other is an ellipse.
Let us take the space as S and the physical time as T; then a circle looks like S² + T² = α, where α is a cosmological constant.
An ellipse looks like S²/α² + T²/β² = γ, where α, β, γ are cosmological constants.
An ideal circle is a theoretical abstraction only. Practically we get ellipses in the Universe as the most common shape. At the center of the Universe, the space – time is curved in with infinite derivative. The curvature of space – time and its derivative get reduced with the increase of α, β in the ellipse. Thus, the further away from the center of the Universe, the lesser is the curvature of space – time and its derivative. Figure one shows the elliptically shaped Universe with a center at the middle of the Universe. The loop (at the center) has huge mass concentrated into the center with infinite density. Because of the enormous mass at the center, if we assume mass is the deformation of space – time, a huge space is concentrated into the center. The total amount of space times the total amount of time around a boundary of the universe is a constant.
Let us take the space as S and the physical time as T; then, for the Universe at any boundary, S × T = constant. Because at the center an enormously huge space is trapped into a small volume, S tends to infinity and T then tends to zero, but is not zero. It means that time is very slow at the center of the Universe. As the boundary gets bigger and bigger, time becomes faster and faster. That is why distant galaxies move faster than nearer galaxies: they are in the outer boundary of the Universe. Because space is expanding and the Universe is getting more and more space, time becomes slower and slower for the Universe.
III. Conclusion
Without any significant mass or energy, any free space is an absolute free space, although it is a theoretical abstraction only. Theoretically, a free space is called an absolute free space if it is devoid of any significant and effective mass or energy. The space – time is absolutely flat at an absolute free space (a three dimensional space – time with a two dimensional flat space and one dimensional time). The presence of significant mass or energy makes space – time curved in (a four dimensional space – time with three dimensional space and one dimensional time).
Role of prostacyclin in pulmonary hypertension
Prostacyclin is a powerful cardioprotective hormone released by the endothelium of all blood vessels. Prostacyclin exists in equilibrium with other vasoactive hormones and a disturbance in the balance of these factors leads to cardiovascular disease including pulmonary arterial hypertension. Since its discovery in the 1970s, concerted efforts have been made to make the best therapeutic use of prostacyclin, particularly in the treatment of pulmonary arterial hypertension. This has centred on working out the detailed pharmacology of prostacyclin and then synthesising new molecules based on its structure that are more stable or more easily tolerated. In addition, newer molecules have been developed that are not analogues of prostacyclin but that target the receptors that prostacyclin activates. Prostacyclin and related drugs have without doubt revolutionised the treatment and management of pulmonary arterial hypertension but are seriously limited by side effects within the systemic circulation. With the dawn of nanomedicine and targeted drug or stem cell delivery systems it will, in the very near future, be possible to make new formulations of prostacyclin that can evade the systemic circulation, allowing for safe delivery to the pulmonary vessels. In this way, the full therapeutic potential of prostacyclin can be realised, opening the possibility that pulmonary arterial hypertension will become, if not curable, a chronic manageable disease that is no longer fatal. This review discusses these and other issues relating to prostacyclin and its use in pulmonary arterial hypertension.
DISCOVERY
Prostacyclin is very important cardio protective lipid mediator released by blood vessels. It is one member of the eicosanoid family of mediators, which also include prostaglandins, thromboxanes and leukotrienes. Prostacyclin was discovered in 1976 by a group led by Salvador Moncada and John Vane. 1 Initially called prostaglandin (PG)X, prostacyclin was identified as an unknown lipid mediator formed by microsomes prepared from rabbit or pig aortas that inhibited human platelet aggregation and relaxed some preparations of isolated blood vessels. Early studies showed that PGX is the major metabolite of arachidonic acid in the arterial walls of a number of species, including man. 2 PGX was later identified as 5z-5,6-didehydro-9-deoxy-6,9a-epoxyprostaglandin F 1 and renamed as prostacyclin. 3 Early studies attributed prostacyclin release as the mechanism mediating the anti-thrombotic properties of the endothelium 4 and its place as a fundamental mediator in cardiovascular health was set. A current (2014) PubMed search of the term 'prostacyclin' generates 17958 hits with 1992 hits for the terms 'prostacyclin' and 'pulmonary hypertension'. Pulmonary hypertension is a devastating, progressive and ultimately fatal condition with few treatment options, which, at best slow progression but do not cure the disease. Traditionally drugs have been designed to target the pulmonary vasculature as either vasodilators or inhibitors of smooth muscle remodeling. Most recently the right heart, which fails under the burden of extra work exerted on it by increased pulmonary pressures, has become a viable therapeutic target in the search for new drugs to treat pulmonary hypertension.
This review will cover what is known about the synthetic and receptor pathways associated with prostacyclin and how this knowledge has been applied and translated to produce treatments. Specifically the review will discuss how the known actions of prostacyclin provide a compelling case for its utility for treatment of both pulmonary vessels and the right heart. The review will also identify the limitations of prostacyclin therapies and speculate upon how modern medical technologies might be applied to improve its utility in this disease. Finally, with the idea that pulmonary arterial hypertension may, in the future, be treated with stem cell therapies to supplement organ regeneration and/or transplant, the potential role of prostacyclin in these approaches will be highlighted.
SYNTHESIS OF PROSTACYCLIN
Endothelial cells are the predominant source of prostacyclin in the body and prostacyclin is the main eicosanoid made by endothelial cells. As described below and illustrated in Figure 1 there are three key steps to the synthesis of prostacyclin. Prostacyclin is synthesised from the 20 carbon fatty acid (20:4) arachidonic acid by the concerted actions of cyclo-oxygeanse (COX) and prostacyclin synthase 5 ( Figure 1). The first step involves liberation of arachidonic acid from stores. Arachidonic acid is not normally free in cells but acetylated in membrane phospholipids. The best-studied pathway for arachidonic acid liberation involves phospholipase A 2 ( Figure 1). There are multiple forms of phospholipase A 2 but cytosolic forms (cPLA 2 ) and, in some circumstances, calcium-independent PLA 2 (iPLA 2 ) are thought to drive arachidonic acid liberation in endothelial cells. Arachidonic acid can also be liberated through a second pathway after phospholipase C cleaves an inositol triphosphate group, giving diacylglycerol (DAG), which can then be hydrolyzed by lipases to monoacylglycerol and then to free arachidonic acid and glycerol.
Once free inside the cell arachidonic acid is metabolized by various enzymatic and non-enzymatic routes to eicosanoids (or icosanoids; lipid mediators derived from 20 carbon fatty acids). The second step in prostacyclin formation is metabolism of arachidonic acid by COX in two stages. In the first stage arachidonic acid is converted to prostaglandin (PG)G 2 via an oxygenase reaction and then in the second stage, to PGH 2 by a peroxidase reaction. The third and final stage in prostacyclin synthesis is the metabolism of PGH 2 by prostacyclin synthase, which is one of a number of synthase enzymes downstream of COX ( Figure 1). It is the relative expression of these PG synthase enzymes that critically dictate the profile of prostanoids released by a given cell type under different conditions. For example, endothelial cells and platelets both express the isoform COX-1 but prostacyclin synthase is highly expressed in endothelial cells with little or no thromboxane synthase. In contrast, in platelets thromboxane synthase is highly expressed whilst there are negligible levels of prostacyclin synthase. As a result, despite both tissue types being high expressers of COX-1, the prostanoid products they produce are highly polarized and, in this way, perform diametrically opposed functions within the cardiovascular system. Like COX, prostacyclin synthase is a P450 enzyme, expression of which in endothelial cells, is regulated by shear stress and growth factors.
Pulmonary arterial hypertension is classically associated with reduced vasodilators (including prostacyclin) and increased vasoconstrictors, which is why current therapies rely so heavily on manipulation of vasoactive pathways. Specifically, in terms of eicosanoids, pulmonary arterial hypertension is associated with reduced urinary markers of prostacyclin and increased markers of thromboxane. 6 This is in line with reduced prostacyclin synthase in lungs of patients with pulmonary arterial hypertension. 7 Further, transgenic mice overexpressing prostacyclin synthase or mice inoculated with prostacyclin synthase gene 8,9 are protected from development of disease symptoms. 10 Prostacyclin synthase gene delivery in the form of genetically modified stem cells has also been reported to protect against development of experimental pulmonary arterial hypertension. 11,12 Once formed by endothelial cells prostacyclin doesn't simply diffuse out of cells but is exported by highly regulated transporter systems, most likely of the ATP-binding cassette transporters (ABC) 13 class with the likely member of this class most used for prostanoids, including prostacyclin, being multidrug resistance protein 4 (MRP4/ABCC4). 14 Once released from cells prostacyclin is then free to act on receptors to mediate its actions. The idea that pulmonary arterial hypertension may be associated with reduced secretion of prostacyclin at the level of a transporter has not been addressed and, where tested, inhibition of MRCP4 leads to protection in animal models attributed to an action on cGMP/cGMP transport. 15 Nevertheless, the lack of literature in this area suggests that elucidation of precise mechanisms of prostacyclin flux in pulmonary vessels during disease may provide insight and new drug targets.
RECEPTOR PATHWAYS UTILIZED BY PROSTACYCLIN AND IMPLICATIONS FOR TREATMENTS IN PULMONARY ARTERIAL HYPERTENSION
Once released by blood vessels prostacyclin produces its powerful protective effects on the vasculature and platelets by activating cell surface receptors and in some tissues by activation of cytosolic peroxisome proliferator-activated receptors (PPAR). For prostacyclin, the favored cell surface receptor is known as the 'IP' receptor ( Figure 2). IP receptors are members of the large and diverse group of receptors known as G protein-coupled receptors (GPCRs). In the case of prostacyclin, IP receptors are coupled to activation of the enzyme adenylate cyclase which converts ATP to the powerful second messenger cAMP ( Figure 2). The biological effects of cAMP in a given tissue, which are diverse, are mediated by activation of cAMP-dependent protein kinases (also known as protein kinase A) and Exchange protein activated by cAMP (Epac; Figure 2). In vascular smooth muscle cAMP mediates relaxation and reduces proliferation and in platelets reduces thrombosis via regulation of calcium levels and associated pathways. Protein kinase A and Epac act synergistically to inhibit vascular smooth muscle cell proliferation 16 and whilst pulmonary artery smooth muscle cells express both of these pathways, preliminary studies suggest that Epac is down regulated in pulmonary hypertension. 17 As with other prostanoids, the complicating pharmacological feature of prostacyclin is that, whilst its acts preferentially on its designated subtype (i.e. IP) receptors, it can cross over and activate any of the other prostanoid receptors in particular circumstances ( Figure 2). This means that, for example, where IP receptors are limiting, prostacyclin can activate thromboxane (TP) receptors. As mentioned above, the opposing properties of thromboxane and prostacyclin in the cardiovascular system are critical to the maintenance of vascular health. This balance is broken when thromboxane is produced in excess, or, similarly where IP receptors are saturated. In these settings prostacyclin becomes a mimetic for thromboxane inducing vasoconstriction. Prostacyclin and related drugs can also cross over onto constrictor EP and FP receptors, which, as with TP, can limit the dilator actions of prostacyclin, as well as dilator EP and DP receptors which may have a beneficial effect. This issue of specificity is of relevance to the use of prostacyclin drugs to treat pulmonary arterial hypertension since there is, as with all pharmaceutical preparations, the danger of overriding local sensing pathways.
The existence of multiple IP receptor subtypes has been suggested in some tissues but these observations are based on pharmacological studies and have not been validated at the gene level. Nonetheless, the authors of a recent study claim to have conclusively identified two IP receptor subtypes using a human airway epithelial cell line exposed to a host of IP agonists in the presence or absence of a selective IP antagonist. 18 Whilst this observation is potentially very important, it remains to be seen whether the distinct IP receptor subtypes can be identified in other human cells.
The potential for GPCRs to homo- and hetero-dimerize is well established. IP receptors can form homodimers via the interactions of disulphide bonds. 19 Importantly, IP receptors may also heterodimerize with thromboxane TP receptors. 20 The IP-TPa complex has been suggested to have a protective role in promoting a "PGI 2 -like" response from TPa activation by TP ligands. 20
Figure 2. PGI 2 acts preferentially on cell surface IP receptors that activate adenylate cyclase to convert ATP to cAMP. cAMP activates protein kinase A and Exchange protein activated by cAMP (Epac). In blood vessels and platelets this results in calcium sequestration and inhibition of activation, equating to vessel relaxation, reduced remodeling and reduced thrombosis respectively. These signaling events also reduce inflammation. In the cytosol, PGI 2 activates PPARb receptors which work by three discrete pathways. Firstly, PPARb binds to RXR to drive transcription of target genes. Secondly, it represses BCL6 and thirdly, when activated, it can bind and repress PKCa. Activation of PPARb can lead to similar functional effects to activation of IP receptors, although by these very different pathways. When PGI 2 is present in excess and these pathways are overwhelmed it can activate other prostanoid receptors leading to IP-type signaling in the case of EP 2 , EP 4 and DP or to functionally opposing effects in the case of TP, FP, EP 1 or EP 3 receptors.
After activation, IP receptors undergo desensitization by PKC-dependent phosphorylation 21 and internalization,
which constitute endogenous pathways to regulate and limit prostacyclin signalling. As is common for drugs acting on natural receptor pathways the prospect of IP desensitization/internalization may be a confounding factor in utilizing prostacyclin analogues as therapeutic interventions and, as discussed can shunt biological responses away from dilator to constrictor pathways. Evidence of desensitisation of IP receptors and/or their down stream pathways has been noted in clinical studies. In line with this continuously infused epoprostenol is associated with tolerance in patients with severe pulmonary hypertension, and dose adjustments have to be made to maintain clinical effects. 23,24 Indeed, in patients with pulmonary arterial hypertension secondary to COPD the dilator effects of epoprostenol on pulmonary pressures were subject to tachyphylaxis within 24 hours. 25 Prostacyclin can also work by activating the cytosolic nuclear receptor PPARb 26 ( Figure 2). PPARb is considered to be anti-inflammatory in a number of settings where it acts by genomic and non-genomic mechanisms 26 (Figure 2). Importantly for the treatment of pulmonary arterial hypertension, the prostacyclin drug, treprostinil, activates PPARb in platelets, 27 lung fibroblasts 28 and blood vessels. 29 Work from our group and others has also shown that selective, non-IP, PPARb agonists relax pulmonary artery smooth muscle cells 30 and prevent hypertension in an hypoxic rat model. 29 In animal models we found that whilst the PPARb agonist GW0742 prevented pulmonary arterial hypertension, reducing right heart hypertrophy, it did not reduce muscularization of vessels in the lung. 29 This suggested to us that PPARb agonists might have a protective action directly on the right heart in pulmonary arterial hypertension. Recently we, with collaborators, tested this idea using a pulmonary artery banding model where workload is applied to the right heart mechanically without any contribution from pulmonary pressure per se. 31 Of direct relevance, others have shown that PPARb activation in adult hearts facilitates mitochondrial function and improves cardiac performance under pressure-overload conditions. 32 In our study, GW0742 prevented right heart remodeling and transcripomic profiling of heart tissue suggested that the mechanism was classically genomic involving the PPAR target gene Angptl4. 31 Angptl4 is a member of the angiopoietin-like family and regulates angiogenesis and lipid metabolism. While no data currently exist relating Angptl4 to idiopathic pulmonary arterial hypertension, it has recently been associated with high-altitude adaptation in Tibet 33 and Angptl4 is associated with left heart failure where it protects against myocardial infarction and no reflow through preservation of vascular integrity. 34 These observations support the idea that Angptl4 may be a viable mechanism by which PPARb activation leads to cardioprotection and suggest that this pathway may be therapeutically important in other forms of heart failure, such as seen in pulmonary arterial hypertension. This is an interesting notion since our work shows this is independent of actions on vessels which means that activation of PPARb could be a good adjunct therapy to current drugs acting on vasodilator pathways. In our work we have suggested that the time could be right for a clinical study to assess the effects of PPARb in pulmonary arterial hypertension, since there are orally active drugs available that have already been used man. 
However, this needs to be treated with extreme caution for two key reasons. Firstly, PPARβ drugs may negatively interact with current drugs, 29 and secondly PPARβ drugs are associated with an increased risk of cancer 36,37 and warnings have been issued for their use, particularly directed at sports performance dosing, where illicit procurement of the drug may be considered by athletes.
ROLE OF COX-1 AND COX-2 IN PROSTACYCLIN GENERATION AND IMPLICATIONS FOR PULMONARY ARTERIAL HYPERTENSION
As described above, prostacyclin is formed from PGH 2 produced by the enzyme COX. COX has two isoforms: COX-1, which is constitutively expressed, and COX-2, which is induced at sites of inflammation. 38 COX-1 is the predominant enzyme present in endothelial cells and its loss in vessels virtually abolishes release of prostacyclin. 39-41 This is also true in conditions of inflammation associated with atherosclerosis. 42 However, COX-2 takes over from COX-1 as the driver for prostacyclin release under conditions of gross systemic inflammation such as that associated with sepsis. 43 Outside large vessels, COX-2 is expressed in some key tissues, including the lung. 40 It is now accepted that pulmonary arterial hypertension is, at least in part, driven by inflammatory cytokines 44-47 and interferon. 48,49 With this in mind, our group was the first to suggest that induction of COX-2 by cytokines may be implicated in pulmonary arterial hypertension. 50 Others showed similar data in cells relevant to pulmonary arterial hypertension. 48,51-53 If prostacyclin is the main product of cells expressing COX-2 in the lungs in pulmonary arterial hypertension, it is likely to form a protective response. This idea is supported by data showing a detrimental effect of COX-2 gene deletion in mouse models of pulmonary arterial hypertension. 54,55 However, if prostacyclin synthase is overwhelmed and/or if the IP receptor population is saturated, COX-2 will drive a constrictor response, and this may explain a protective effect of COX-2 inhibitors in other experimental models. 56 It should be noted, however, that there is no evidence to suggest that this phenomenon predominates, and the role of COX-2 and associated prostacyclin release in human pulmonary arterial hypertension remains the subject of investigation.
PROSTACYCLIN AS A DRUG TO TREAT PULMONARY ARTERIAL HYPERTENSION
Pulmonary arterial hypertension is rare but fatal, with mean survival without therapy of less than 2 years. The introduction of prostacyclin therapies in the early 1990s has led to increased survival rates of around 5-7 years, with some patients living with pulmonary arterial hypertension on prostacyclin therapy for more than 10 years. The features of pulmonary arterial hypertension include a reduced prostacyclin/thromboxane balance, constriction, remodeling and thrombosis. With these features in mind, the therapeutic utility of prostacyclin (also known as epoprostenol) was assumed very early in the field, and in 1984 a placebo-controlled trial was conducted where prostacyclin was continuously infused in patients with peripheral vascular disease. 57 This paved the way for a landmark trial in 1996 where prostacyclin was infused intravenously for 12 weeks in patients with pulmonary arterial hypertension. 23 Forty-one patients received prostacyclin and 40 received the conventional treatment at the time, which consisted of anticoagulants, oral vasodilators, diuretic agents, cardiac glycosides, and supplemental oxygen. Exercise capacity, measured using the 6-minute walk test, was improved in all 41 patients treated with prostacyclin but was reduced in the 40 patients treated with conventional therapy. Importantly, mortality was improved in the patients administered prostacyclin. However, serious side effects were noted in the prostacyclin arm, which included catheter-associated sepsis. Epoprostenol (FLOLAN®) remains a therapeutic option in the treatment of pulmonary arterial hypertension but is seriously limited by its very short half-life at room temperature and the side effects associated with the need for continuous infusion requiring a permanent intravenous catheter and pump (Figure 3). To address some of these limitations, a number of more stable prostacyclin analogues have been developed for the treatment of pulmonary arterial hypertension (Figure 4). These include iloprost (Ventavis®) and treprostinil (Remodulin®), which together with epoprostenol constitute the current prostacyclin therapies in patients with pulmonary arterial hypertension (Figure 3). Treprostinil, which has similar pharmacodynamics to epoprostenol, is more stable and can be administered subcutaneously and intravenously. Iloprost is administered as an inhaled preparation using a nebulizer 6-9 times a day. Treprostinil is also available in an inhaled formulation (TYVASO®; Figure 3) given approximately every 4 hours. A common and important feature of prostacyclin drug therapy is the need for slow, incremental and individualized dosing where the patient is closely monitored for tolerability.
Figure 3. Pulmonary arterial hypertension drugs acting on prostacyclin pathways. Synthetic prostacyclin (epoprostenol), injected treprostinil, inhaled treprostinil (itreprostinil), oral treprostinil (otreprostinil) or iloprost are drugs based on the structure of prostacyclin, which activate the IP receptor but may also activate other prostaglandin receptors (PGRs). Selexipag is a non-prostacyclin drug given orally which selectively activates the IP receptor. GW0742 is a non-prostanoid, non-IP small molecule drug that activates PPARβ in experimental models of pulmonary hypertension, but is not used clinically to treat pulmonary arterial hypertension. Routes of administration and general starting titration doses are shown.
PROSTACYCLIN AND COMBINATION THERAPY IN PULMONARY ARTERIAL HYPERTENSION
Despite their effectiveness, and because of their limitations and side effects, prostacyclin drugs are generally restricted to patients with pulmonary arterial hypertension who are in functional class III or IV. 58 Intravenous epoprostenol is often the preferred drug, with intravenous treprostinil given as an alternative. Inhaled iloprost is generally reserved for patients for whom intravenous therapy is not acceptable or appropriate. As prostacyclin drugs are reserved for patients with severe pulmonary arterial hypertension, in most cases they will be given in combination with a phosphodiesterase type 5 (PDE5) inhibitor and/or an endothelin receptor antagonist (ETRA). 58 The utility and mechanism of action of PDE5 inhibitors 59,60 and ETRAs 61 are reviewed in detail elsewhere. However, in brief, PDE5 inhibitors work by increasing the bioactivity of endogenously released NO. NO, like prostacyclin, is a vasodilator, but acts on a parallel signaling pathway via activation of soluble guanylate cyclase, leading to increases in the second messenger cGMP. PDE5 removes cGMP; thus, blocking PDE5 potentiates NO signaling. The effects of NO and prostacyclin are additive in blood vessels 60 and work in powerful synergy in platelets. 60,62 ETRA drugs, on the other hand, work independently of the NO or prostacyclin pathways by blocking the actions of the powerful constrictor peptide endothelin-1. It is not clear how the pharmacology of these three pathways affects particular combinations of drugs in pulmonary arterial hypertension, and there are no validated biomarkers that can predict which drugs will work together optimally. However, this is an area of research that our group and others are investigating using endothelial cells grown from blood progenitors, allowing insights into vascular function in patients with pulmonary arterial hypertension. 63,64
LIMITATIONS OF PROSTACYCLIN DRUGS IN PULMONARY ARTERIAL HYPERTENSION
As with other treatments for pulmonary arterial hypertension, prostacyclin drugs are very expensive, with estimated costs of $30,000 to more than $200,000 per patient per year in the United States. However, the main limitations of prostacyclin drugs for those that require the drug to be infused are infection and pain at the site of injection. In addition, all forms of prostacyclin drugs are associated with side effects such as systemic hypotension, flushing, jaw pain and nausea. Intense research efforts are ongoing to address these limitations and include the development of small molecule, non-prostacyclin, selective IP receptor agonists, most notably selexipag (Figure 4). Selexipag is a potent, orally active pro-drug whose active metabolite, MRE269 (ACT-333679), is a selective prostacyclin IP receptor agonist. Unlike prostacyclin analogue drugs, because selexipag is specific for the IP receptor, it has little or no effect on other prostanoid receptors. This means that where drugs based on prostacyclin structures may be limited by underlying constrictor actions on EP, TP or FP receptors, selexipag targets dilator IP pathways only. However, this high specificity for IP receptors means selexipag will also fail to activate DP receptors and PPARβ, which may contribute to the efficacy of other prostacyclin drugs. Nonetheless, a phase II proof-of-concept study showed favorable results, 65 and in 2009 the GRIPHON 66 (Prostacyclin (PGI 2 ) Receptor agonist In Pulmonary arterial HypertensiON) trial was initiated by Actelion to test the utility of selexipag in a randomized, multicenter, double-blind, placebo-controlled trial in patients with pulmonary arterial hypertension. In June 2014 Actelion announced that initial analysis of the GRIPHON study showed that selexipag decreased the risk of a morbidity/mortality event and that the overall tolerability profile of selexipag in GRIPHON was consistent with existing prostacyclin therapies. According to the Actelion website, 67 in December 2014 marketing authorizations will be submitted to the European Medicines Agency (EMA) for selexipag (Uptravi®) in the treatment of pulmonary arterial hypertension, with similar applications pending to the US Food and Drug Administration (FDA). However, even if oral dosing with selexipag proves to be as efficacious as prostacyclin drugs dosed by infusion or inhalation, it is still limited by side effects common to prostacyclin therapy due to its actions on the systemic circulation. In the wake of success with orally active IP-selective selexipag, most recently the FDA approved the first orally active formulation of a prostacyclin drug, treprostinil (Orenitram®). Orenitram® is treprostinil in an extended-release tablet formulation for the treatment of patients with pulmonary arterial hypertension. 68 The approval comes after the FREEDOM studies. 69,70 Whilst current data in patients not previously taking prostacyclin drugs are disappointing, studies show that in some patients oral treprostinil may successfully replace existing use of continuously infused drug. However, as with selexipag, oral dosing does not prevent side effects, and future studies and development in formulations will be needed to improve prostacyclin drugs in all their guises.
FUTURE OF PROSTACYCLIN DRUGS IN PULMONARY ARTERIAL HYPERTENSION
Clearly prostacyclin drugs in all their forms have proven utility in pulmonary arterial hypertension but are severely limited by route of delivery and effects on the systemic circulation. Attempts to circumvent the need for drug infusion have been successful with drugs such as inhaled treprostinil and iloprost and orally active selexipag, but the systemic side effects remain the limitation in realizing the full potential of this class of drugs. One approach being adopted in other human diseases is nanomedicine, where targeted drug delivery can improve efficacy and overcome side effects (Figure 5). The use of nanomedicine technology has, in some cases, revolutionized drug formulations for the treatment of cancer. 71 Nanomedicine is a relatively young science and can be defined as the medical application of nanotechnology; in the case of drug delivery systems this equates to the use of formulations in the nanometer range. As the field grows, the types of potential formulations suitable to encapsulate drugs increase. The idea that this technology can be applied to drugs for pulmonary arterial hypertension was recently reviewed, 72 but it remains relatively novel and untested. Nevertheless, we suggest that the following approaches may solve the current limitations of prostacyclin drugs. Firstly, a safe and effective encapsulation of a prostacyclin drug within a suitable nanoparticle to evade the systemic circulation is required. This may be enough to allow specific targeting of pulmonary vessels if characteristics of the local tissue environment are similar to those in tumors. In tumors, some nanomedicines can accumulate because of increased vascular leak and reduced lymphatic drainage. However, in the case of specific delivery of a prostacyclin drug to affected pulmonary vessels, additional molecular engineering may be required. One approach to this would be to use an antibody-drug conjugate (Figure 5). Here it would first be necessary to identify a specific antigen expressed locally within pulmonary vessels, then manufacture and humanize the antibody. This may be possible by using comparative systems approaches such as proteomics, recently used to identify translationally controlled tumor protein (TCTP) as a marker of pulmonary arterial hypertension. 64 These, of course, are not trivial tasks and would require the concerted efforts of chemists, bioengineers, pharmacologists and clinicians.
FUTURE APPLICATION FOR THE PROSTACYCLIN PATHWAY IN STEM CELL AND ORGAN REGENERATION THERAPIES
Current therapies have had dramatic effects in increasing the life expectancy of patients with pulmonary arterial hypertension. However, ultimately, in most cases, these fail, and in some patients a lung transplant is the only therapeutic option. Needless to say, this is not a perfect solution, nor is it one that can benefit most patients. With this in mind, there are increasing efforts in the use of stem cell therapy to treat pulmonary arterial hypertension. This may be either at the level of giving stem cells in an attempt to repopulate the diseased vessels in the pulmonary vasculature or, at the most ambitious end of the spectrum, to grow lung tissue in bioincubators for transplant. Prostacyclin pathways play a potentially important role in these approaches. Any stem cell therapy in pulmonary arterial hypertension would require a fully functioning COX/prostacyclin synthase pathway and would similarly require fully functioning prostacyclin receptors to be present. This type of approach in stem cell and gene therapy has been reviewed elsewhere 73 but remains very much at the theoretical and experimental stage.
SUMMARY AND CONCLUSIONS
Prostacyclin is a multifaceted cardioprotective hormone released by the endothelium. Since its discovery in the 1970s, prostacyclin has been the subject of thousands of publications, yet we are still discovering new insights into its biology and pharmacology. Prostacyclin remains arguably the most effective therapy for patients with pulmonary arterial hypertension, but current drugs based on its pharmacology have serious limitations. It is hoped that in the future specific targeting of prostacyclin drugs alone, or in combination with other medications, can resolve these limitations and allow less frequent but more effective administration of high doses of drug that will, if not cure this disease, at least convert it into an effectively managed, non-fatal condition. | 2016-10-26T03:31:20.546Z | 2014-12-31T00:00:00.000 | {
"year": 2014,
"sha1": "074d38e37353c8c27508b9c746b5eea0580143ef",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5339/gcsp.2014.53",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a9b439b8448e10a5473c2b7764d36999364a24c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239645417 | pes2o/s2orc | v3-fos-license | Exploring the relationship between writing anxiety and writing self-efficacy of international students learning Turkish as a second language
The purpose of this correlational study was to explore the relationship between writing anxiety and writing self-efficacy levels of international students learning Turkish as a second language. Data were collected from a convenience sample of 204 international students through the “Writing Anxiety Scale for Learners of Turkish as a Foreign Language”, the “Writing Self-Efficacy Scale for Students Learning Turkish as a Foreign Language” and a personal information form. In the analyses of the data, descriptive statistics, the Mann-Whitney U test, the Kruskal-Wallis H test and Spearman's Rank-Difference Coefficients of Correlation were used. In this study, international students were found to have medium levels of writing anxiety and high levels of writing self-efficacy. Analyses indicated that male students had higher levels of action-oriented writing anxiety than female students. It was also found that doctoral students had higher levels of action-oriented writing anxiety than undergraduate students. Lastly, it was determined that there was a low and positive correlation between international students' writing self-efficacy and action-oriented writing anxiety.
Introduction
Writing skill is a multidimensional skill known as one of the hardest skills to acquire and learn compared to other language skills (Allen & Corder, 1974; Cook, 2013; Mah & Khor, 2015). Therefore, writing often turns into a difficult process for both second (SL) or foreign language (FL) learners and first language (L1) learners (Belet & Yaşar, 2007; Idris, 2009; Mah et al., 2017). This also applies to international students who come to Turkey from different countries and learn Turkish as a second language (TSL) (Tiryaki, 2013; Altunkaya & Ateş, 2017). Writing skill has a complex structure consisting of cognitive, affective and psychomotor dimensions. However, since more cognitive processing is needed in writing skill compared to other language skills, attention is generally drawn to the cognitive aspect of writing (Karakaya & Ulper, 2011; Şen & Boylu, 2017). On the other hand, the affective dimension of writing, which directly or indirectly affects both the cognitive and psychomotor dimensions of writing, is also of particular importance (Zabihi, 2018). The affective dimension of writing, which consists of various factors such as motivation, self-efficacy perception and disposition, can affect the writing process in different ways (McLeod, 1987; Cheng, 2002; Şen & Boylu, 2017).
Of many affective factors, anxiety can cause difficulties in language learning (Balta, 2018;Blasco, 2016;Horwitz, Horwitz & Cope, 1986;Stewart, Seifert & Rolheiser, 2015) by occupying working memory capacity and affecting cognitive processing negatively (Terry, 2017). Since writing is a complex cognitive activity, learning to write in a SL might cause as much anxiety as the other language skills (Tsui, 1996). Daly and Miller (1975) introduced the concept of writing anxiety (WA) in order to express the anxiety that individuals encounter while writing and developed a scale to measure WA. Their study had an important role in showing the extent of the effect of WA (Cheng, Horwitz & Schallert, 1999) and led an increasing number of studies to be carried out on WA. The "Foreign Language Classroom Anxiety Scale" study by Horwitz et al. (1986) was the reason why anxiety studies gained momentum in the field of FLs. Through this study, attention was drawn to anxiety in FL classes and it turned into a field of study. According to Cheng et al. (1999) who studied the relationship between WA and FL class anxiety, FL class anxiety expresses a more general anxiety, while WA expresses an anxiety limited to language skills.
Literature review
Writing anxiety (WA) is basically a term used to describe the stress or anxiety that students experience during the writing process (Blasco, 2016). It is a reactive situation towards writing. WA reflects individuals' desire to write (Faigley, Daly & Witte, 1981; Zorbaz, 2011; Maden, Dincel & Maden, 2015), success in writing activities (Aytaç-Demirçivi, 2020; Balta, 2018; Maden et al., 2015; Jalok & Idris, 2020), and writing avoidance (Faigley et al., 1981; Zorbaz, 2011). WA can occur at every stage of the writing process. According to Şen and Boylu (2017, p. 1126), WA "can occur before, during and after the act of writing, and is closely related to the learner's past life and learning experiences". In a classroom, anxiety may arise from the student, teacher or teaching style (Jawas, 2019; Young, 1991). For this reason, the sources of WA can also differ (Waer, 2021; Lipsou, 2018). The related studies have indicated that some of the sources of anxiety in the writing process are the different dimensions of writing (Karakuş Tayşi, 2018), negative self-perception, negative experiences and inadequate writing skills (Ekmekçi, 2018; Zorbaz, 2011), negative criticism of written products (Zorbaz, 2011; Aytan & Tunçel, 2015), and failure in writing classes (Aytan & Tunçel, 2015).
In the literature, there are studies that examine different dimensions of writing skill and reveal various types of WA (Aytan & Tunçel, 2015;Şen & Boylu, 2017). Within the scope of this study, international students' WA was examined and addressed in two dimensions which Şen and Boylu (2017) revealed in their study. The former is the "action-oriented writing anxiety" (AOWA) which expresses the feelings students feel while writing, the pleasure they get from writing and their motivation for writing (Şen & Boylu, 2017). The latter is the "environment-oriented writing anxiety" (EOWA) which expresses, in the most general sense, the discomfort and anxiety caused by teachers and other learners in the learning environment (Şen & Boylu, 2017).
Another affective factor playing a significant role in language learning is learners' perception of self-efficacy. Self-efficacy is one of the main variables that provide motivation in language learning (Woodrow, 2011) and represents an individual's belief in her/his own capacity to accomplish a task (Bandura, 1977). Stating that Bandura's definition is the basis of many different definitions of self-efficacy, Buyukikiz (2012) expresses that self-efficacy perception reflects the beliefs of individuals about what they can do. In other words, the perception of self-efficacy refers to a future situation rather than the past. The results of the research on self-efficacy in the context of SL learning point out that self-efficacy is a significant factor affecting learners' interest, determination, motivation, effort to learn, the goals they choose to follow, the usage of self-regulation strategies while carrying out a task, and their success in learning (Pajares, 1996, 2003; Linnenbrink & Pintrich, 2003; Sabti, Rashid, Nimehchisalem & Darmi, 2019; Schunk, 2003; Lane, Lane & Kyprianou, 2004; Raoofi, Tan & Chan, 2012). Writing self-efficacy (WSE), which covers a more limited area, can be defined as an individual's own belief in her/his potential to fulfill writing activities. WSE can shape writing behaviors by affecting writing skill in different ways (Schunk & Zimmerman, 2007) and can affect writing performance (Woodrow, 2011). In fact, individuals with high self-efficacy perception stand out in dealing with difficulties, working and achieving high success (Schunk, 2003; Schunk & Zimmerman, 2007; Buyukikiz, 2012). Studies have revealed that individuals with high levels of self-efficacy tend to make use of writing opportunities, pay more attention to and put more effort into writing, are more insistent on the improvement of their writing skills, and have a better writing performance (Sawyer, Graham & Harris, 1992; Pajares & Johnson, 1996; Bandura, 1997; Mahyuddin et al., 2006; Tan, 2006; Shah, Mahmud, Din, Yusof & Pardi, 2011).
There are a number of studies in the literature examining various aspects of writing skills of international students who come to Turkey from different countries and learn Turkish as a second language (TSL). In the field of teaching TSL, it is notable that studies on learners' WA and WSE in the context of writing skills are quite limited. There are some scale studies carried out to measure the WA of those who learn TSL depending on their language levels (Maden et al., 2015). For instance, there are different WA scales developed for basic level (Aytan & Tunçel, 2015), intermediate level (Karakuş Tayşi, 2018) and intermediate-advanced level (Şen & Boylu, 2017). It is also possible to make various inferences about WA and WSE from different studies. It was revealed that those who learned TSL were anxious while writing (Maden et al., 2015). It was determined that students' native languages cause WA due to having different alphabets and syntaxes (Akbulut, 2016) and their anxiety levels differ according to their nationalities (Maden et al., 2015). However, no significant gender difference was found in both WSE and WA (Buyukikiz, 2011;Maden et al., 2015;Akbulut, 2016;Altunkaya & Ateş, 2017;Erdil, 2017). Similarly, no significant difference was found among language levels in WA and WSE of the learners of TSL (Akbulut, 2016;Erdil, 2017). Significant relationships were found between WA and attitude towards writing (Akbulut, 2016), and between perceptions of WSE and writing skills (Buyukikiz, 2011). It was determined that there was a high level of relationship between creative writing skills of learners of TSL and their self-efficacy perceptions (Melanlioğlu & Demir Atalay, 2016b) while the use of reflective diary causes changes in the sub-dimensions of WSE (Melanlioğlu & Demir Atalay, 2016a). It was also found that planned writing activities had a positive effect on students' WA, WSE and achievement (Çocuk & Yanpar Yelken, 2021).
Research questions
This study aims to explore the relationship between WA and WSE levels of international students who come to Turkey from different countries and learn TSL. Therefore, answers will be sought to the following research questions:
• What are the levels of WA and WSE of international students learning TSL?
• Are there any statistically significant differences in WA and WSE levels of international students learning TSL according to gender, education level, whether they like to write and whether they do their writing assignments regularly?
• Are there statistically significant relationships among AOWA, EOWA and WSE levels of international students learning TSL?
Method
This current study was carried out as a correlational study that is generally undertaken "to look for and describe relationships that may exist among naturally occurring phenomena, without trying in any way to alter these phenomena" (Fraenkel & Wallen, 2009, p.11). In this study, it is aimed to explore the relationship between WA and WSE levels of international students learning TSL.
Sample
The study population consisted of international students enrolled in different universities in Turkey in the academic year of 2017-2018. The study sample, which was determined using the convenience sampling method, consists of 204 international students. Descriptive statistics regarding the demographic characteristics of the students who make up the sample are given in Table 1 below. As seen in Table 1, 121 (59.3%) of the students are male and 83 (40.7%) are female. All students receive language education at C1 level according to the Common European Framework of Reference for Languages (CEFR). 126 (61.8%) of the students are enrolled in undergraduate programs, 57 (27.9%) in master's and 21 (10.3%) in doctoral programs.
Instruments
Two different scales were administered to the students to collect the data. The first one is the "Writing Self-Efficacy Scale for Students Learning Turkish as a Foreign Language" developed by Gungör and Kan (2015). It was stated that the Cronbach Alpha reliability coefficient of this scale, consisting of one factor and 14 items, was .95 (Gungör & Kan, 2015). The internal consistency coefficient calculated from the data of this study is .94. The other scale is the "Writing Anxiety Scale for Those Who Learn Turkish as a Foreign Language" developed by Şen and Boylu (2017). It was stated that the Cronbach Alpha reliability coefficient of this scale, consisting of two factors and 13 items, was .84 and that 46.82% of the total variance was explained (Şen & Boylu, 2017). The internal consistency coefficients calculated from the data of this study are .68 for the whole scale, .83 for the factor named "Action-Oriented Writing Anxiety" (AOWA) and .66 for the factor named "Environment-Oriented Writing Anxiety" (EOWA). In addition, the "Personal Information Form" prepared by the researchers was used to collect data about students' demographic characteristics, whether they like to write and whether they do their writing assignments regularly.
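A minimal sketch of how a Cronbach's alpha of this kind can be computed is given below. This is illustrative only and not the authors' analysis code: the respondent-by-item matrix is randomly generated and merely mirrors the reported dimensions (204 respondents, 14 items); random, uncorrelated responses give an alpha near zero, whereas real scale data with correlated items yield high values such as those reported above.

```python
# Cronbach's alpha for a Likert-type scale (hypothetical data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents and columns = scale items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(204, 14))     # 204 respondents, 14 items, 1-5 Likert
print(round(cronbach_alpha(scores), 2))
```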
Data analysis
The compatibility of the data collected through the scales with the tests to be performed was examined with the Kolmogorov-Smirnov normality test, and it was determined that the AOWA and EOWA scores and the WSE scores of the students did not indicate normal distributions (p < .05). Therefore, the following were used in the analyses of the data:
• frequency (f) and percentage (%) to describe students' demographic characteristics,
• descriptive statistics to reveal the distributions of AOWA, EOWA and WSE levels,
• the Mann-Whitney U test to determine whether students' AOWA, EOWA and WSE levels differ statistically significantly according to gender, whether they like to write and whether they do their writing assignments regularly,
• the Kruskal-Wallis H test to determine whether students' AOWA, EOWA and WSE levels differ statistically significantly according to students' education levels,
• Spearman's Rank-Difference Coefficients of Correlation to determine whether there are statistically significant relationships between AOWA, EOWA and WSE levels of the students.
A minimal sketch of this analysis pipeline is given after this list.
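The following scipy-based sketch illustrates the nonparametric pipeline listed above. It is a hypothetical example rather than the study's analysis code: the data frame, group sizes and scores are placeholders, and the calls simply mirror the tests named in the list.

```python
# Illustrative nonparametric pipeline (hypothetical data, not the study data).
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "gender": ["male", "female"] * 102,                    # 204 hypothetical students
    "level": ["BA"] * 126 + ["MA"] * 57 + ["PhD"] * 21,    # education levels
    "aowa": stats.norm.rvs(size=204, random_state=1),      # placeholder anxiety scores
    "wse": stats.norm.rvs(size=204, random_state=2),       # placeholder self-efficacy scores
})

# 1) Normality check (Kolmogorov-Smirnov against a standard normal after z-scoring)
print(stats.kstest(stats.zscore(df["aowa"]), "norm"))

# 2) Two-group comparison (e.g., AOWA by gender) -> Mann-Whitney U
male = df.loc[df.gender == "male", "aowa"]
female = df.loc[df.gender == "female", "aowa"]
print(stats.mannwhitneyu(male, female, alternative="two-sided"))

# 3) Three-group comparison (AOWA by education level) -> Kruskal-Wallis H
groups = [g["aowa"].values for _, g in df.groupby("level")]
print(stats.kruskal(*groups))

# 4) Association between AOWA and WSE -> Spearman rank correlation
print(stats.spearmanr(df["aowa"], df["wse"]))
```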
Results

WA and WSE levels of international students learning TSL
Descriptive statistics regarding WA and WSE levels of international students learning TSL are given in Table 2. The mean scores of the students in Table 2 show that their AOWA and EOWA levels are moderate whereas their WSE level is high.
WA and WSE levels of international students learning TSL according to their gender
The results of the Mann-Whitney U test that was conducted to test whether the WA and WSE levels of male and female students indicate any significant difference are given in Table 3. The analysis results in Table 3 show that there is a significant difference only in AOWA levels of the students (U = 4095.50, p=.03<.05). Calculated mean ranks indicate that male students have a higher level of AOWA than female students.
WA and WSE levels of international students learning TSL according to their education levels
The results of the Kruskal-Wallis H test that was conducted to test whether the students' levels of WA and WSE differ significantly according to their education levels are given in Table 4. The analysis results in Table 4 show that there is a significant difference only in the AOWA levels of the doctoral students and undergraduate students (χ2=6.10, p=.04<.05). In other words, doctoral students have a higher level of AOWA than undergraduate students.
WA and WSE levels of international students learning TSL according to whether they like to write
The results of the Mann-Whitney U test that was conducted to test whether the students' levels of WA and WSE differ significantly according to whether they like to write are given in Table 5. The analysis results in Table 5 show that there is a significant difference only in the AOWA levels of the students (U = 1102.00, p=.00<.05). The calculated rank means show that students who like to write have a higher level of AOWA than students who do not like to write.
WA and WSE levels of international students learning TSL according to whether they do writing assignments regularly
The results of the Mann-Whitney U test that was conducted to test whether the students' WA and WSE levels indicate a significant difference according to whether they do writing assignments regularly are given in Table 6. The analysis results in Table 6 show that there are significant differences in the students' AOWA levels (U = 2480.50, p=.00<.05) and WSE levels (U = 2956.50, p=.03<.05). The calculated rank means show that both the AOWA levels and the WSE levels of the students who do their writing assignments regularly are higher than the students who do not do their writing assignments regularly.
The relationship between AOWA, EOWA and WSE levels of international students learning TSL
The Spearman's Rank-Difference Coefficients of Correlation, which were calculated to test whether there are statistically significant relationships among the AOWA, EOWA and WSE levels of the students, are given in Table 7. As seen in Table 7, there is a low and positive correlation between the AOWA and WSE levels of the students (rs=.23, p<.01).
Discussion
The affective dimension of writing has a crucial role in the improvement of writing skills of international students who learn Turkish as a second language (TSL). Therefore, affective factors of writing such as anxiety, attitude, and self-efficacy have been the subject of many studies and have been examined in various dimensions in different samples (Buyukikiz, 2011; İşcan, 2015; Maden et al., 2015; Altunkaya & Ateş, 2017; Erdil, 2017; Polatcan, 2019). In this study, international students' WA and WSE were examined and interpreted in the context of learning TSL.
As one result of the analyses conducted within the scope of this research, it was revealed that international students' WA was at a moderate level. Based on this finding, it may be stated that the WA level of the students learning TSL was at a desired level, because low or high anxiety levels have the potential to negatively affect students' language learning success, writing skills and performance (Horwitz, 1986; Horwitz et al., 1986). Similarly, low anxiety indicates an insufficient general arousal state, whereas high anxiety is considered harmful because it causes behaviors such as stress, fear and avoidance (Maden et al., 2015). Therefore, the fact that students' WA is at a moderate level is a positive finding in terms of the learning and development processes of writing skills in TSL. When the literature is examined, it can be seen that there are different study results regarding this finding. For instance, Maden et al. (2015) state in their study that international students mostly experience high levels of WA, while İşcan (2015) states that the somatic and social anxiety levels of Jordanian students in their writing skills are high but their cognitive anxiety levels are low.
Another important finding of this research was that WSE level of international students was high. It can be pointed out that students considered themselves competent in writing in TSL and knew their strengths and weaknesses in their writing skills (Taş & Balci, 2019). In addition, it can also be concluded that these students with high WSE perceived themselves as talented and successful in writing (Pajares & Valiante, 1996). Parallel to this finding of the study, Altunkaya and Ateş (2017) state that students have an above-average perception of WSE while Erdil (2017) reports that students have a moderate level of WSE perception.
In this study, while there was no significant difference in EOWA and WSE levels of international students according to their gender, male students had a higher level of AOWA than female students. Based on this finding, it can be assumed that males were more anxious about writing than females. The studies in the related literature indicate that different results are obtained on this subject. For instance, Maden et al. (2015) and Akbulut (2016) state that gender does not cause a difference in WA, whereas Cheng (2002) claims that gender causes a change in WA and that female students are more anxious. A similar situation is observed in students' perception of WSE. Buyukikiz (2011), Akbulut (2016), Altunkaya and Ateş (2017) and Erdil (2017) state that gender does not play a role in students' perceptions of WSE. However, Buyukikiz (2011) states that female students' WSE is higher than male students' although the difference is statistically insignificant. A similar finding was encountered in this study, and it was found that the WSE level of female students was higher than that of male students even though the difference was statistically insignificant. While no significant difference was observed in EOWA and WSE levels of international students according to their education levels, doctoral students had higher levels of AOWA than undergraduate students. It is thought that this difference may be due to the quality of the writing products expected from doctoral students. Similar to the finding on WSE, Erdil (2017) states that there is no significant difference in WSE according to students' education level.
It was observed that there was a significant difference only in AOWA levels of international students according to whether they like to write. It was determined that students who liked to write had a higher level of AOWA than students who did not like to write. This finding can be interpreted to mean that students who like to write, although they have high WSE, are more anxious while writing because they are careful to write well.
Another finding was that there were significant differences in AOWA and WSE levels of international students according to whether they did their writing assignments regularly. It was seen that both the AOWA and WSE levels of the students who did their writing assignments regularly were higher than the students who did not. Buyukikiz (2011) reports a similar finding that the self-efficacy perceptions of students who do extracurricular writing activities are higher than those who do not. It can be assumed that students' regular writing assignments contribute positively to their WSE. Since the writing skill develops by practicing writing, students' perceptions of competence towards their writing potential also develop. In addition, it is expected that those who do their writing assignments regularly have high WSE. According to this finding, doing writing assignments regularly could be a variable that might cause an increase in action-oriented WA. This may be due to the fact that students' expectations and goals in the writing process increase as they do their writing assignments regularly.
Relationships among the variables of this study were examined, and a low and positive correlation was found between the AOWA and WSE levels of international students. Based on this finding, it can be concluded that students' WSE levels increase as their AOWA levels increase even though the change is at a low level. In other words, this finding indicates that although students found their writing skills in TSL sufficient, they still worried about the act of writing.
Conclusions
In this study, international students' WA and WSE were examined and interpreted in the context of learning TSL. International students' WA was at a moderate level whereas their WSE level was high. There was no significant difference in EOWA and WSE levels of international students according to their gender, but male students had a higher level of AOWA than female students. EOWA and WSE levels of international students did not differ significantly according to their education levels, but doctoral students had higher levels of AOWA than undergraduate students. There was a statistically significant difference only in AOWA levels of international students according to whether they like to write. There were significant differences in AOWA and WSE levels of international students according to whether they did their writing assignments regularly. There was a low and positive correlation between the AOWA and WSE levels of international students.
Recommendations for Further Studies
Based on the results of this study, which emphasizes the affective dimensions of writing skill, some suggestions can be made. For instance, it may be suggested that teachers who teach TSL to international students make use of learning and writing activities that will help students have high levels of WSE and medium levels of WA in the learning and teaching process. It may also be suggested that students be given opportunities to participate in writing activities prepared according to student-centered and communication-based approaches, and receive constructive and encouraging feedback and corrections to their writing products. Hence, this can increase students' self-confidence and motivation for writing and contribute to the development of their writing skills. For further studies, it can be suggested that a similar study with a larger sample that can best represent international students in Turkey be conducted in order to draw more robust and general conclusions. In addition, the topic can be studied in more detail and in depth using different research methods such as mixed methods. Finally, examining other cognitive, affective and psychomotor variables that are thought to affect the writing skills of international students will make great contributions to the related literature.
"year": 2021,
"sha1": "c81004de09c3b00a66630e737ab1ab5f7c5dd7b7",
"oa_license": null,
"oa_url": "https://un-pub.eu/ojs/index.php/cjes/article/download/6071/7875",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "63c2167bd32d767de547a4c3d8d6a5d2f8a18a68",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
8506639 | pes2o/s2orc | v3-fos-license | Genomic Expression Libraries for the Identification of Cross-Reactive Orthopoxvirus Antigens
Increasing numbers of human cowpox virus infections that are being observed and that particularly affect young non-vaccinated persons have renewed interest in this zoonotic disease. Usually causing a self-limiting local infection, human cowpox can in fact be fatal for immunocompromised individuals. Conventional smallpox vaccination presumably protects an individual from infections with other Orthopoxviruses, including cowpox virus. However, available live vaccines are causing severe adverse reactions especially in individuals with impaired immunity. Because of a decrease in protective immunity against Orthopoxviruses and a coincident increase in the proportion of immunodeficient individuals in today's population, safer vaccines need to be developed. Recombinant subunit vaccines containing cross-reactive antigens are promising candidates, which avoid the application of infectious virus. However, subunit vaccines should contain carefully selected antigens to confer a solid cross-protection against different Orthopoxvirus species. Little is known about the cross-reactivity of antibodies elicited to cowpox virus proteins. Here, we first identified 21 immunogenic proteins of cowpox and vaccinia virus by serological screenings of genomic Orthopoxvirus expression libraries. Screenings were performed using sera from vaccinated humans and animals as well as clinical sera from patients and animals with a naturally acquired cowpox virus infection. We further analyzed the cross-reactivity of the identified immunogenic proteins. Out of 21 identified proteins 16 were found to be cross-reactive between cowpox and vaccinia virus. The presented findings provide important indications for the design of new-generation recombinant subunit vaccines.
Introduction
The genus Orthopoxvirus (OPV) from the family Poxviridae contains complex viruses which replicate entirely in the cytoplasm of the infected cell [1,2]. Their linear double-stranded DNA genome of up to 220 kbp [1] contains no introns and encodes more than 200 open reading frames (ORFs) [3]. The genus is best known for two of its most prominent species: vaccinia virus (VACV) and variola virus (VARV). Interestingly, VACV was used to eradicate VARV, the causative agent of smallpox, through a worldwide vaccination campaign [4,5]. This was possible due to an antigenic relationship between members of OPVs. An earlier infection with one of these members provides some protection against subsequent infections with the others [6]. Nevertheless, the declaration of the successful eradication of smallpox in 1980 [7] led to the discontinuation of the routine smallpox vaccination [8] due to the risk of rare but severe adverse reactions [9,10]. Other human-pathogenic OPV members include monkeypox virus and cowpox virus (CPXV) [1], the latter having the largest genome of all OPVs [11].
CPXV is prevalent in Western Eurasia and has an extremely broad host range [4,12]. Human cowpox is a zoonotic disease, usually transmitted by cats, which mostly causes self-limiting local infections [13]. However, severe clinical courses resulting in prolonged treatment and scarring have been described [13,14]. Furthermore, a case of generalized, fatal CPXV infection in an immunocompromised patient with a life-long history of atopic dermatitis has been reported [15]. Human cowpox particularly affects young people [16], indicating that the lack of smallpox vaccination may render today's population more susceptible to OPV infections including cowpox [17,18]. At the same time there is no approved effective antiviral treatment available, and conventional smallpox vaccines recently administered can cause rare but severe adverse reactions [5], notably affecting immunodeficient individuals and those with atopic dermatitis [19,20]. Unfortunately, this group is already at a higher risk to develop OPV infections with a severe clinical course. Therefore, there is a pressing need for the development and approval of safer and more effective vaccines [17].
Recombinant subunit-based vaccines represent possible alternatives to the conventional smallpox vaccines [21]. To develop these vaccines, it is necessary to identify those antigens inducing the most effective immune response. Several antigenic VACV proteins and combinations thereof have been successfully tested in animal models [22][23][24]. However, beside individual immune responses, the genetic diversity of poxviruses has to be taken into account when designing subunit vaccines [25]. It has long been known that VACV and CPXV show immunological similarities as well as differences [26][27][28]. However, as yet little is known about the cross-reactive CPXV antigens. The inclusion of antigens that are cross-reactive to several OPV species in subunit vaccines could potentially help confer a resilient immune response [21] and prepare more effective subunit vaccines.
In this study, we first constructed and evaluated four different genomic bacteriophage λ-based expression libraries (EL) containing the VACV and CPXV genomes. The EL were then serologically screened using sera from VACV-immunized humans and animals as well as clinical serum samples from CPXV-infected humans and animals. Through these screenings we were able to identify 21 immunogenic proteins of CPXV and VACV. The identified proteins show diverse functions and a genome-wide distribution, with surface proteins as well as non-surface (structural) proteins being present. By analyzing the whole set of antigens, we found 16 out of 21 proteins to be cross-reactive between CPXV and VACV. Six of these 16 cross-reactive proteins are also perfectly conserved among all OPVs. Seven cross-reactive proteins are proposed to be tested as components of subunit vaccines. We therefore describe a low-cost approach to antigen discovery which is especially well suited for the investigation of large DNA viruses. The approach described is independent of prior knowledge of antibody targets. Software-based ORF prediction and primer design are thus not required. The method described is therefore suitable for the identification of cross-reactive proteins shared by further clinically relevant OPV species besides VACV and CPXV. The integration of clinical serum samples in the screening experiments, in addition to sera obtained from immunized individuals, further provides a more authentic situation, allowing the identification of the most antigenic proteins. These proteins might be especially well suited for inclusion in subunit vaccines.
Construction and validation of genomic OPV expression libraries
The data presented here were derived from serological screenings of four genomic OPV EL varying in the genome species expressed and insert size. The genomic EL were constructed by cloning fragments of genomic OPV DNA into a modified bacteriophage λ-based vector (ZAP Express). This vector can accommodate DNA fragments with a length of up to 12 kb. The cloned DNA fragments can be excised out of the phage in the form of the kanamycin-resistant pBK-CMV phagemid vector, which allows the insert DNA to be characterized in a plasmid system. Two of the EL described were constructed using partially digested CPXV genomic DNA of strain GuWi and designated according to insert size: EL-CPXV-0.2k-0.7k and EL-CPXV-0.2k-3k. Additionally, two EL containing the VACV genome strain New York City Board of Health (NYCBH) were constructed and designated EL-VACV-3k-12k and EL-VACV-0.2k-0.7k. For example, EL-CPXV-0.2k-3k is an EL containing 0.2-3 kb inserts of a partially digested CPXV genome. The recombinant titer of the constructed EL ranged from 3 × 10^6 to 2 × 10^9 plaque-forming units per milliliter.
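As an aside, titers like these come from standard plaque assays; the short sketch below shows the underlying arithmetic. The plaque count, plated volume and dilution factor are hypothetical examples, not the actual library data.

```python
# Plaque-forming-unit (pfu) titer from a plaque assay (hypothetical numbers).
def titer_pfu_per_ml(plaques: int, volume_plated_ml: float, dilution: float) -> float:
    """dilution is the dilution factor of the plated sample, e.g. 1e-6."""
    return plaques / (volume_plated_ml * dilution)

# e.g. 150 plaques from 0.1 mL of a 10^-6 dilution -> 1.5e9 pfu/mL
print(f"{titer_pfu_per_ml(150, 0.1, 1e-6):.1e} pfu/mL")
```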
For the validation process recombinant plaques were randomly picked and the insert DNA sequenced. The obtained sequences were aligned with an appropriate OPV genome. Figure S1 schematically shows the distribution of the sequences on an OPV genome for EL-VACV-3k-12k (A) and EL-CPXV-0.2k-0.7k (B). For further validation of their complexity the constructed libraries were screened with monoclonal (anti-rA27, anti-CPXV 3D11) and polyclonal (goat anti-rA27) antibodies. First, the libraries were screened with the monoclonal anti-CPXV 3D11 antibody. This antibody was generated against native CPXV particles [29]. Through library screenings, immunopositive signals for the expected antigen could be identified (Figure S2), as confirmed by sequencing. The anti-rA27 antibodies were first tested in an ELISA system. Here they were shown to recognize virus particles as well as the recombinant protein expressed in E. coli that was used to immunize goats and mice (Figure S3). Through library screenings, positive signals for the expected antigen were identified for all tested antibodies (Figure 1A-D), as confirmed by sequencing.
To show the functionality of the screening system, several controls were performed (Figure 1E-H). For the identification of the most antigenic proteins and a maximum reduction of the background signal all sera were diluted 1/200 to 1/1000. All human sera were preincubated with E. coli lysate to remove anti-E. coli antibodies that might potentially be present. Thus, after reducing the background signal, a clear differentiation between positive and negative signals was possible (Figure 1E). No positive signals resulted from an incubation with the appropriate secondary antibody alone (Figure 1F). In addition, for a demonstration of selective specificity, the EL were screened using poxvirus-naive serum (Figure 1G) as well as a poxvirus-naive serum with a high anti-dengue virus titer (Figure 1H). No positive signals were seen in the control screenings.
Figure 1. Validation of constructed genomic expression libraries. For a validation of their complexity the constructed genomic EL were serologically screened with polyclonal and monoclonal antibodies of known specificities. Immunopositive signals are indicated by white arrows, for example, for the following screening combinations: (A) EL-CPXV-0.2k-0.7k with polyclonal goat anti-rA27 serum, (B) EL-CPXV-0.2k-3k with monoclonal mouse anti-rA27 antibody, (C) EL-VACV-3k-12k with goat anti-rA27, and (D) EL-VACV-0.2k-0.7k with goat anti-rA27. Further evaluation was performed by immunoscreening a genomic CPXV EL with different sera and controls: (E) Immunoscreening of EL-CPXV-0.2k-3k using serum from a VACV-immunized rabbit. The white arrow points to an immunopositive plaque, the black arrow to an immunonegative plaque.
Identification of immunogenic VACV proteins
For the construction of EL, genomic OPV DNA was partially digested with the Bsp143I restriction enzyme and cloned into the ZAP Express vector. Through serological screenings immunoreactive phage clones were identified and the insert DNA sequenced. The DNA sequences obtained were aligned with an appropriate reference OPV genome for the identification of the encoded protein(s). For convenience, the genes encoding the identified immunogenic proteins are indicated according to the established convention of naming VACV genes or ORFs. This consists of using the HindIII restriction endonuclease DNA fragment letter (A-P), followed by the ORF number within the fragment, and L or R, depending on the direction of the ORF [2]. When inserts encode more than one protein, no precise identification of the antigenic protein is possible. In that case, the position of the identified genomic region is indicated through the respective HindIII fragment letter.
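The naming convention summarized above is regular enough to be handled programmatically. The sketch below is an illustrative parser for VACV-style ORF names (HindIII fragment letter A-P, ORF number, optional L/R direction); it is not part of the original analysis, and the example names are used only for demonstration.

```python
# Illustrative parser for VACV-style ORF names such as "A25L" or "E3L".
import re

ORF_NAME = re.compile(r"^(?P<fragment>[A-P])(?P<number>\d+)(?P<direction>[LR])?$")

def parse_orf_name(name: str) -> dict:
    match = ORF_NAME.match(name)
    if match is None:
        raise ValueError(f"not a valid VACV-style ORF name: {name!r}")
    return {
        "fragment": match["fragment"],                  # HindIII fragment letter (A-P)
        "number": int(match["number"]),                 # ORF number within the fragment
        "direction": match["direction"] or "unknown",   # L = leftward, R = rightward
    }

for name in ("A25L", "E3L", "B2R", "A47"):
    print(name, parse_orf_name(name))
```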
The constructed genomic EL were screened with anti-OPV sera to scan the humoral immune response from vaccinated or infected humans and animals. EL containing the VACV genome were used to identify immunoreactive proteins of VACV. This should serve as a proof of principle for the functionality of the EL prior to the identification of CPXV-reactive proteins. The serological screening of EL-VACV-3k-12k using sera from immunized humans (Vaccinia immune globulin, VIG) and rabbits as well as clinical human sera resulted in the identification of several immunoreactive plaques. After sequencing of the insert DNA, immunodominant genome regions encoding several proteins could be identified. One of the most frequently detected genomic regions was in the HindIII A fragment, which almost always included the gene WR148, the VACV orthologue of the cowpox A-type inclusion protein (A25). Proteins encoded by HindIII fragments D and C/B could also be identified.
The screening of EL-VACV-3k-12k did not result in the precise attribution of immunoreactive plaques to individual proteins, owing to the fact that the relatively long inserts often encode more than one protein. To circumvent this problem, EL-VACV-0.2k-0.7k was constructed. The serological screening of this EL using rabbit antiserum resulted in the identification of the immunoreactive proteins A18 (DNA helicase), A47 (hypothetical protein), B2 (function unknown), and E3 (double-stranded RNA-binding protein) ( Table 1).
Identification of immunogenic CPXV proteins
EL-CPXV-0.2k-0.7k and EL-CPXV-0.2k-3k were serologically screened to identify immunoreactive proteins of CPXV. A wide range of immunogenic proteins could be identified by screening both EL using sera from different species (Table 1). Only those immunoreactive clones encoding a single protein are included in Table 1. Immunoreactive clones with inserts encoding multiple proteins were also identified. These are not listed in Table 1 but are provided below. Screening of EL-CPXV-0.2k-3k using serum from a CPXV-infected cat resulted in the identification of immunoreactive clones encoding two proteins B19/B20 (NCBI accession numbers: NP_619993/NP_619994). Screening of the same library using anti-CPXV rat serum led to immunoreactive clones encoding the protein combinations A41/A42 (NCBI accession numbers: NP_619960/NP_619961), A43/no homologue (NCBI accession numbers: NP_619962/NP_619963), and I4/I5 (NCBI accession numbers: NP_619870/NP_619871).
OPV proteins with diverse functions are immunogenic
The first main discovery of our serological screening was that immunogenic proteins can have most diverse functions. Interestingly, a few enzymes including A18 (DNA helicase), A48 (Thymidylate kinase), E9 (DNA polymerase), and H6 (DNA topoisomerase type I) were found to be immunogenic. Moreover, several ankyrin-like proteins (M1, B18, and B20) were found to be immunoreactive. A number of immune evasion proteins including A53 (tumor necrosis factor receptor, CrmC), C23 (chemokinebinding protein), and E3 (double-stranded RNA-binding protein) were also found to be antigenic. According to their different functions, the detected immunogenic OPV proteins varied in their location within the virus, ranging from core proteins (A3, A4) to membrane proteins (B22), including structural proteins (M1, B18, B20) as well as enzymes. Interestingly, there was no trend toward recognition of antigens with early or late gene expression.
Genes encoding immunoreactive proteins are widely dispersed on an OPV genome
We also analyzed the distribution of the genes encoding immunoreactive proteins on a generalized OPV genome map (Figure 2). For simplicity, we again depicted an OPV genome containing HindIII restriction endonuclease DNA fragment letters (A-P). HindIII DNA fragments with genes encoding identified immunoreactive proteins are highlighted with a respective EL-specific icon. Figure 2 gives a detailed overview of the proportion of the OPV genome encoding immunogenic proteins. It contains the identified genes listed in Table 1 as well as the information on immunoreactive clones with inserts encoding two or more genes. The immunodominant genes were found to be part of nearly all different HindIII restriction fragments, including genes located in the central region of the genome as well as those in the terminal regions.
VACV and CPXV cross-reactive proteins A4 and E3 are most antigenic
The constructed VACV and CPXV EL were also utilized to determine cross-reactive OPV antigens. For this purpose VACV EL were screened using anti-CPXV sera, and CPXV EL were screened using anti-VACV sera. To be able to identify cross-reactive proteins, the identified antigens were grouped into four subsets depending on the experimental conditions: 1. CPXV EL with anti-VACV sera, 2. CPXV libraries with anti-CPXV sera, 3. VACV libraries with anti-VACV sera, and 4. VACV libraries with anti-CPXV sera (Figure 3). All proteins attributed to subsets 1 and 4 represent cross-reactive antigens of VACV and CPXV. Moreover, the presence of a protein in more than one subset indicates a higher antigenicity. By determining the intersections of all subsets, protein A4 was found to be present in all four subsets. In addition, protein E3 was present in two of four subsets. We therefore conclude that out of the 16 cross-reactive proteins identified, A4 and E3 are the most antigenic cross-reactive proteins between CPXV and VACV.
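The subset analysis above amounts to simple set intersections and membership counts. The following minimal Python sketch illustrates that bookkeeping; the protein lists used here are placeholders, not the actual identifications (which are given in Table 1 and Figure 3).

```python
from collections import Counter

# Placeholder identifications for the four screening subsets.
subsets = {
    "1: CPXV EL + anti-VACV sera": {"A3", "A4", "E3", "D13", "H6"},
    "2: CPXV EL + anti-CPXV sera": {"A4", "E3", "B20"},
    "3: VACV EL + anti-VACV sera": {"A4", "A18", "B2", "E3"},
    "4: VACV EL + anti-CPXV sera": {"A4"},
}

# Proteins found with a "crossed" serum/library combination (subsets 1 and 4)
# are by definition cross-reactive between VACV and CPXV.
cross_reactive = (subsets["1: CPXV EL + anti-VACV sera"]
                  | subsets["4: VACV EL + anti-CPXV sera"])

# The more subsets a protein appears in, the stronger the evidence for antigenicity.
counts = Counter(p for hits in subsets.values() for p in hits)
for protein in sorted(cross_reactive, key=lambda p: counts[p], reverse=True):
    print(f"{protein}: present in {counts[protein]} of {len(subsets)} subsets")
```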
Discussion
Besides the discussion of potentially re-emerging VARV infections, the increasing number of CPXV infections, particularly in young people [13,30-32], reawakened the interest in this zoonotic poxviral disease. As a result of discontinued smallpox vaccination in the younger generations, cowpox as well as monkeypox are regarded as emerging zoonotic hazards, requiring the development of new effective therapies or vaccines [17]. Recombinant subunit vaccines are considered to be safer than conventional live or attenuated vaccines [25]. Their development depends on the identification of cross-reactive OPV antigens that will likewise stimulate the most effective immune response to infectious virus. Many antigenic proteins of VACV have been identified as possible candidates in this context [33,34]. However, genetic diversity between OPVs has not yet been taken into account [25], nor has the fact that, besides VARV, monkeypox virus and CPXV are actually important pathogenic poxviruses.
In this study a set of 21 different immunogenic proteins could be identified by serological screening of genomic CPXV and VACV expression libraries (EL) using anti-VACV and anti-CPXV clinical sera. The whole set of these identified proteins was subsequently analyzed to identify cross-reactive OPV antigens.
The method described here allowed the identification of antibodies specific for OPV proteins. Like every method, the serological screening of genomic λ-based EL has its benefits and limitations. One of the most important benefits is the fact that the construction of such a library does not require any prior knowledge of antibody targets. Therefore it is not necessary to predict ORFs of unknown proteins and to design primers using software, which allows the identification of new antigens. Furthermore, genomic EL ideally contain the entire host genome and express the complete set of virus-encoded proteins. This is especially advantageous for viruses with a large genome like that of the OPVs, coding for up to 200 genes. For the libraries presented we have shown that they cover large parts of the viral genome by sequencing randomly picked clones. Therefore this approach allows the investigation of the humoral immune response without prior restriction to a certain protein class. Nevertheless, it is important to note that the fraction of sequences incorporated in a recombinant DNA library depends on the degree of partial DNA digestion [35]. The size of OPV proteins ranges from less than 50 amino acids to more than 1,000 amino acids, which poses methodological problems. On the one hand, it is preferable to identify single OPV proteins during the screening process by using inserts of even shorter length. On the other hand, short inserts could result in the translation of incomplete proteins that lack the proper folding for obtaining the native immunogenic structure. To circumvent this and other limitations, EL containing inserts of different size were constructed. Moreover, the serological screening of genomic EL is not an absolutely quantitative method but rather a qualitative approach. There is no guarantee that all immunoreactive proteins will be recognized and identified, even when specific antibodies are present. However, due to the number of clones screened, several immunogenic proteins should be found. This limitation was reinforced by pre-diluting the immune sera from 1/200 to 1/1,000, so that only the most antigenic proteins, which could be suitable for inclusion in vaccines, were discovered.
Antigens expressed in E. coli may not completely parallel those expressed in OPV-infected eukaryotic cells, due to a lack of posttranslational modifications. Additionally, solid-phase immobilization of library-expressed proteins may mask protein epitopes or affect the three-dimensional structure of the antigens. To assess the impact of these factors on the ensuing screening experiments, anti-rA27 antibodies were tested in ELISA assays. The anti-rA27 antibodies were generated in goats and mice immunized with a recombinant protein expressed in E. coli. The specificity of these antibodies was tested in ELISA by coating the recombinant protein as well as VACV particles. The antibodies were shown to recognize both. Thus, the immobilized recombinant A27 protein expressed in E. coli and coated onto plastic was still in a conformation similar to that present on virus particles. The same antibodies were used for the validation of the EL by an immunoscreening which resulted in the identification of the A27L gene. Additionally, the EL were screened with a monoclonal antibody raised against native CPXV particles. These screenings also resulted in immunoreactive plaques. Taken together, these results demonstrate that phage-expressed proteins are detected by antibodies generated against native virus particles as well as by those raised against proteins expressed in E. coli.
Although a number of immunogenic OPV proteins has been identified so far, the cross-reactivity of these antigens was not yet adequately taken into account. Our primary goal was, therefore, to develop a method for the identification of proteins that are immunogenic and cross-reactive to different OPVs. For this purpose, VACV genome-containing EL were screened first with anti-VACV sera to demonstrate the suitability of the approach selected. Subsequently, the serological screening of EL containing VACV genome with anti-CPXV sera and vice versa was adopted to identify some of the cross-reactive proteins of VACV and CPXV.
Out of 21 identified immunogenic proteins, 16 were found to be cross-reactive. All 16 cross-reactive proteins could be identified through the screening combination of the CPXV EL with sera from VACV-immunized individuals, including an anti-VACV hyperimmune rabbit serum. Hyperimmunization is a vaccination method by which the same antigen is repeatedly administered, leading to boosted immune responses. On the other hand, the screening of a VACV EL using anti-CPXV clinical serum samples resulted in the identification of only one protein, A4. The different outcomes of these screening combinations can be explained by several factors. First of all, only immunoreactive clones with inserts encoding single proteins are listed in Table 1 and included in Figure 3. Thus, the screening results of EL-VACV-3k-12k, which contains longer inserts, are not depicted. The screening of EL-CPXV-0.2k-3k resulted in immunoreactive clones with inserts coding for single as well as multiple proteins. On the other hand, all clinical serum samples were obtained after a primary, naturally acquired CPXV infection. Here it is important to note that the serological screening of the EL with hyperimmune sera resulted in the identification of more cross-reactive antigens than a screening with a serum from naturally infected individuals did. These results perfectly correlate with the progression of a humoral immune response. Primary immune responses are often weak due to a limited amount of presented antigen. Therefore, only B cell clones producing high affinity antibodies or those with antigen receptor specificities for the most abundant antigen are selected for proliferation. Interestingly, A4 was identified as the most abundant protein in intracellular mature virions of VACV [3]. Following the second and subsequent boosts to the antigen, the affinity and amount of antibodies increase due to a process called affinity maturation. Hence, more antigenic determinants can be identified using conventional serological techniques. Therefore, clinical sera will usually be able to recognize fewer proteins than hyperimmune sera do. Finally, as mentioned above, the serological screening of genomic EL is not an absolutely quantitative method.
Figure 2. Genome-wide distribution of genes encoding immunogenic proteins. Shown is a generalized OPV genome map with HindIII restriction endonuclease DNA fragment letters (A-P). The icon-selected HindIII fragments encode at least one immunoreactive protein identified through plaque screening of the respective EL. Multiply-selected fragments encode immunoreactive proteins identified in more than one EL. The screenings were performed using sera from VACV-immunized and CPXV-infected humans and animals. doi:10.1371/journal.pone.0021950.g002
To get a general overview of cross-reactive antigens, it is therefore advisable to screen EL with sera of immunized or even better hyperimmunized individuals. Antibodies present in those sera have higher affinities for their cognate antigen due to somatic hypermutation. This allows an easier identification of the antibody targets using serological screening methods. However, screening of EL using sera obtained after a primary infection can result in the identification of the most antigenic or the most abundant cross-reactive proteins.
Among the identified cross-reactive proteins A4 was the most frequently identified immunogen. A4 (p39) was first identified by Maa and Esteban [36] as a highly antigenic protein eliciting a strong humoral immune response in rabbits and mice. They could also demonstrate the protein to be cross-reactive between VACV and CPXV and therefore supposed that it might have important biological functions [36]. More recently, DNA plasmids encoding the A4L gene were used to prime mice before a boost with VACV, resulting in an improvement of the immune reaction compared to the current vaccination strategy [23].
The immune evasion protein E3 was the second most frequently identified cross-reactive protein. E3 is an important virulence factor responsible for providing interferon resistance. E3 deletion mutants were shown to be less pathogenic in mouse models [37-39], and thus it was proposed to use them as attenuated vaccines [37].
Among the identified proteins, D13, E3, A3, A4, H6, B2, and E2 were earlier shown to be immunogenic by using a VACV proteome microarray approach [34,40]. Sahin and colleagues showed the proteins A4, A25, E9, F12, B2, and D13 to be antigenic by screening genomic shot-gun EL [41]. Finally, the immunogenicity of D13 and A4 was shown in a microELISA format with proteins derived from a mammalian in vitro expression system [33]. Thus, 10 out of 21 immunogenic proteins identified have been described as OPV antigens before. To our knowledge, the remaining 11 proteins have not been identified before as antibody targets using genome-wide screening approaches. We further compared our set of identified immunogenic proteins with the records of the Immune Epitope Database. This database provides a compilation of experimentally determined B- and T-cell epitopes [42]. For nearly all of the identified immunoreactive proteins, the presence of one or more Immune Epitope Database hits could be confirmed. However, to our knowledge, the vaccinia homologues A53 (CPXV191), B22 (CPXV219), and C23 (CPXV003) are not yet known to contain any B- or T-cell epitopes. All of these proteins were also identified as being cross-reactive between VACV and CPXV. Thus, we were able to identify three new immunogenic OPV proteins which are cross-reactive between CPXV and VACV.
The screening of the EL using sera from different species (rabbits, humans, cats, and rats) resulted in the identification of two common antibody target proteins. The protein B20 was detected by antibodies present in VACV-immunized humans and rabbits. Furthermore, rabbit anti-VACV sera and cat anti-CPXV sera reacted with the protein A4. These commonly recognized antibody targets could be particularly well suited for vaccine design. However, the pooling of screening results for different species infected with different OPV strains could also result in conflicting data. It is therefore important to note that the type of infection and thus the protective immunity mounted against an OPV infection can depend on multiple factors. These include the species of OPV, the route of virus entry, as well as the genus/ species of the host and its immune status [1,43]. Furthermore, the impact of the primary versus secondary immune response to the obtained antigens should be taken into account [43].
An ideal subunit vaccine should protect against different OPV species. This could be achieved by including proteins which are highly conserved among OPV species and are therefore cross-protective. It has been shown that even slight heterogeneity of proteins could result in the loss of cross-protection [44,45]. Out of 16 cross-reactive proteins identified, six (A3, A4, D13, E2, E9, and H6) are perfectly conserved in all members of the subfamily Chordopoxvirinae [11,46], which also includes the genus OPV.
In summary, we have identified 16 cross-reactive proteins of CPXV and VACV by serological screening of genomic EL. Six of these proteins are perfectly conserved among all OPV species. Furthermore, we have identified three previously unknown immunogenic OPV proteins which are also cross-reactive between CPXV and VACV. Due to their conservation and frequency of detection, we propose that seven of the cross-reactive antigens identified, namely A3, A4, D13, E2, E3, E9, and H6, could be considered for inclusion in subunit vaccines. This could result in protection not only against CPXV but also against VARV and monkeypox virus. To our knowledge, only the protein A4 has so far been tested as a possible subunit vaccine component. The construction and serological screening of further genomic EL with VARV and monkeypox virus genomes could reveal more cross-reactive antigens and speed up the development of safer vaccines.
Materials and Methods
Enzymes
The DNA restriction endonuclease Bsp143I and T4 DNA ligase were purchased from Fermentas (St. Leon-Rot, Germany).
Bacterial strains
The E. coli strains XL1-Blue MRF' and XLOLR were purchased from Agilent Technologies, Inc. (Santa Clara, CA, USA). The E. coli strain Rosetta™ was purchased from Novagen Inc. (Darmstadt, Germany).
OPV strains
VACV strain New York City Board of Health (NYCBH; VR-1536™) was purchased from the American Type Culture Collection (ATCC). CPXV strain GuWi was isolated from a CPXV-infected elephant infected by a rat [47]. A CPXV named calpox virus was isolated from New World monkeys [48,49].
Recombinant A27 protein (rA27)
For the expression of A27, viral DNA was isolated from the calpox virus [48,49]. The A27L gene was PCR amplified in a 50 µl reaction containing 1× PCR Buffer (Invitrogen, Darmstadt, Germany), 4 mM MgCl2, 100 µM dNTPs, 0.3 µM of each primer (CTGTACTTTCCATGGACGGAACTCTTTTCC and TTGAGTCTGCAGATATGGTCGCCGTCCAGT), 1 unit Platinum Taq DNA polymerase (Invitrogen), and about 50 ng of template DNA. The cycling was carried out in a Mastercycler® ep gradient (Eppendorf, Hamburg, Germany) under the following conditions: 94°C for 2 min, followed by 30 cycles of 94°C for 20 sec, 63°C for 20 sec, and 72°C for 30 sec, and completed by 72°C for 10 min. The amplicons were purified using the QIAquick PCR Purification Kit (Qiagen GmbH, Hilden, Germany) according to the manufacturer's instructions. For expression of the A27L gene, the purified amplicons were ligated into the pTriEx-3 vector (Novagen Inc.). The amplicons as well as the vector were predigested with the restriction enzymes NcoI and PstI prior to the ligation reaction. The ligated DNA was subsequently used to transform competent E. coli strain Rosetta™ cells. The recombinant His-tag protein was finally purified under denaturing conditions using Protino® Ni-IDA columns (Macherey-Nagel, Düren, Germany).
Sera, monoclonal antibodies, and enzyme conjugates
Vaccinia immune globulin (VIG), which is the immunoglobulin fraction of pooled vaccinia-hyperimmune human serum, was a generous gift from BEI Resources (Manassas, VA, USA). The hyperimmune rabbit anti-VACV (strain Lister) serum was obtained from Acris Antibodies GmbH (Herford, Germany). The clinical serum samples included human, cat, and rat sera collected from CPXV-infected individuals. The human serum was collected 8-9 weeks post infection, the cat serum 3-4 weeks post infection, and the rat serum 2-3 weeks post infection. All clinical material was provided by the German Consultant Laboratory for Poxvirus infections (Robert Koch Institute, Berlin, Germany).
The human anti-dengue virus serum was collected 2-3 weeks post infection. To show the absence of anti-OPV antibodies, both the human anti-dengue virus serum and the human poxvirus-naive serum were tested by immunofluorescence assay, as described elsewhere [49]. Briefly, for the detection of OPV-specific antibodies slides were coated with CPXV GuWi-infected Hep2 cells. For the detection of dengue virus-specific antibodies slides were coated with dengue virus-infected (dengue virus type 1) Vero E6 cells (European Collection of Cell Cultures, ECACC: 85020205). The polyclonal goat anti-rA27 serum was generated by subcutaneously immunizing a goat with approximately 200 µg of the recombinant A27 protein expressed in E. coli. TiterMax® Gold was used as adjuvant. The goat was boosted once subcutaneously four months later with about 300 µg antigen and the serum was collected two weeks thereafter.
The monoclonal anti-CPXV 3D11 antibody [29] was kindly provided by Claus Peter Czerny. The monoclonal mouse anti-rA27 A1/6-15 was generated using the hybridoma technology by immunizing C57BL/6 mice with the recombinant A27 protein expressed in E. coli.
ELISA assays
For ELISA testing of anti-rA27 antibodies, 200 ng of rA27 or 5×10^6 UV-inactivated VACV particles (strain NYCBH) were coated overnight in 100 µl/well (0.1 M NaHCO3, pH 9.6) of a 96-well MaxiSorp ELISA plate (Nunc, Langenselbold, Germany). The wells were then blocked with Tris-buffered saline containing 0.05% Tween 20 (TBS-T) and 3% BSA for 1 hr at room temperature. After four washing steps (300 µl/well TBS-T), the diluted anti-rA27 antibodies (in TBS-T with 0.25% BSA) were added at 100 µl/well and incubated for 1 hr at room temperature. Subsequently, the wells were washed again four times and the diluted HRP-conjugated goat anti-mouse or donkey anti-goat antibodies were added (100 µl/well, 1:5,000 diluted in TBS-T with 0.25% BSA) and incubated for 1 hr at room temperature. After four further washes, bound HRP was detected using 3,3',5,5'-tetramethylbenzidine (TMB) substrate tablets (Sigma, St. Louis, MO, USA) and assayed at 450 nm with a reference measurement at 620 nm (Infinite® 200 PRO microplate reader, Tecan Group Ltd., Männedorf, Switzerland).
Sera pre-treatment
Sera generally contain antibodies to E. coli proteins, which can cause a high background signal. To reduce this background signal, sera were pre-incubated with E. coli (strain XL1-Blue MRF') lysate. The lysate was prepared by growing an E. coli culture to saturation and harvesting the cells by centrifugation (15 min, 1,500×g). The cells were then resuspended in a buffer containing 50 mM Tris-HCl (pH 8.0) and 10 mM EDTA and were subsequently broken by three successive freeze-thaw cycles, followed by 3×1 min sonication steps (Sonifier S-450D, Branson, Danbury, CT, USA). The cell debris was then removed by another centrifugation step (15 min, 1,500×g). The resulting lysate (supernatant) was used for incubation with the diluted serum for 30 min at room temperature.
Ethics statement
The clinical cat and rat sera used to screen the genomic expression libraries were provided by the German Consultant Laboratory for Poxvirus infections (Robert Koch Institute, Berlin, Germany). No specific ethical approval was required, since all sera were diagnostic material and no animal experiments using laboratory animals were performed.
Preparation of genomic DNA for cloning
Purification of genomic DNA. OPVs were grown in 175 cm² culture flasks containing a monolayer of Hep2 cells (ATCC) for three days (VACV) or five days (CPXV), respectively. Approximately 2.7×10^7 cells were infected at a virus input multiplicity of infection (MOI) of 0.25 (VACV) or 0.1 (CPXV), respectively. The preparation of poxvirus DNA from the cytoplasm of infected Hep2 cells was performed as described in [50]. The concentration of the prepared DNA was determined using a NanoDrop 1000 (Thermo Scientific, Wilmington, DE, USA). The integrity of the genomic DNA was estimated on a 0.5% agarose gel prepared in 1× TAE buffer with 1 µg/ml ethidium bromide.
Real-time PCR. The ratio between poxvirus and human DNA was estimated by real-time PCR using two different assays. For the detection of OPV DNA the OPV assay was used. The cellular DNA was detected by using the c-myc assay, which detects the housekeeping gene c-myc. The real-time PCR was performed as described elsewhere [51]. As quantitative calibration standards, plasmids with the respective target sequence were measured in each run. For plasmid construction the corresponding sequences were amplified and cloned into a TOPO TA vector (Invitrogen, Karlsruhe, Germany) as described previously [52]. The cycling was carried out in an Mx3000P QPCR system (Agilent Technologies, Inc.) under the following conditions: 95°C for 10 min, followed by 40 cycles of 95°C for 15 sec and 60°C for 30 sec.
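One common way to turn measured Ct values and plasmid calibration standards into an OPV : c-myc copy-number ratio is a log-linear standard curve. The following sketch only illustrates that arithmetic; all Ct values and dilution steps below are invented and do not reproduce the actual assay data.

```python
import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    return slope, intercept

def copies_from_ct(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

# Hypothetical plasmid dilution series (10^2 ... 10^6 copies per reaction).
log10_std = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
opv_std_ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7])   # invented
cmyc_std_ct = np.array([32.5, 29.2, 25.9, 22.5, 19.1])  # invented

opv_curve = fit_standard_curve(log10_std, opv_std_ct)
cmyc_curve = fit_standard_curve(log10_std, cmyc_std_ct)

# Invented Ct values for one DNA preparation.
opv_copies = copies_from_ct(21.8, *opv_curve)
cmyc_copies = copies_from_ct(27.3, *cmyc_curve)

print(f"OPV copies:   {opv_copies:,.0f}")
print(f"c-myc copies: {cmyc_copies:,.0f}")
print(f"OPV / c-myc:  {opv_copies / cmyc_copies:.1f}")
```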
Partial digestion of genomic DNA. Partial digestion of genomic poxvirus DNA was performed using the DNA restriction endonuclease Bsp143I (Fermentas). The amount of Bsp143I applied depended on the DNA insert size desired for cloning. For a fragment size of 3 kb to 12 kb, a Bsp143I concentration of 0.01 U/µg to digest about 60 µg of genomic poxvirus DNA was convenient. A fragment size of 0.2 kb to 3 kb could be achieved by digesting about 50 µg DNA with 0.15 U/µg of Bsp143I. For a fragment size of 0.2 kb to 0.7 kb, 0.5 U/µg of Bsp143I for about 50 µg DNA was convenient.
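The conditions above map a target fragment-size range to an enzyme amount per microgram of DNA. A trivial helper encoding that mapping (the unit-per-µg values are taken from the paragraph above; the 50 µg input used in the example loop is arbitrary):

```python
# Units of Bsp143I per microgram of DNA for each target fragment-size range.
UNITS_PER_UG = {
    "3-12 kb": 0.01,
    "0.2-3 kb": 0.15,
    "0.2-0.7 kb": 0.5,
}

def bsp143i_units(fragment_range: str, dna_ug: float) -> float:
    """Total enzyme units needed for the given DNA mass and fragment range."""
    return UNITS_PER_UG[fragment_range] * dna_ug

for frag_range in UNITS_PER_UG:
    print(f"{frag_range}: {bsp143i_units(frag_range, 50):.1f} U for 50 ug DNA")
```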
Size fractionation. The partially digested poxvirus DNA was fractionated on agarose gels at 50 V for 2-3 hr. The agarose concentration depended on the fragment size to be fractionated: 0.5% for 3 kb to 12 kb fragments, 0.8% for 0.2 kb to 3 kb fragments, and 1.5% for 0.2 kb to 0.7 kb fragments. The fractionated DNA was visualized under long-wave UV light, and the desired fragment range was excised using a scalpel. The excised gel slices were weighed and the DNA extracted using the NucleoSpin Extract II Kit (Macherey-Nagel) according to the manufacturer's instructions. The extracted DNA was eluted in 15 µl elution buffer and the DNA concentration determined using the NanoDrop 1000.
Construction of OPV expression libraries
Starting from the ligation reaction, all genomic ELs were constructed using the ''ZAP Express® Predigested Gigapack® III Gold Cloning Kit'' (purchased from Stratagene, La Jolla, CA, USA, now Agilent Technologies, Inc.) with the BamHI/CIAP-treated vector (former catalog #239615, currently not available) according to the manufacturer's instructions. Methods for which modifications to the original protocol or amount specifications of reagents were necessary are described below.
The partial digestion of DNA with the Bsp143I enzyme results in DNA ends compatible with those produced by the restriction enzyme BamHI. Thus, the partially digested genomic OPV DNA was directly ligated into the BamHI-predigested ZAP Express vector without modifying the DNA ends. The following DNA amounts were ligated for each EL: 336 ng for EL-VACV-3k-12k, 185 ng for EL-VACV-0.2k-0.7k, 80 ng for EL-CPXV-0.2k-0.7k, and 36 ng for EL-CPXV-0.2k-3k, in a final volume of 7 µl. DNA from the ligation reactions was directly used for the packaging reactions. The following volumes of ligation reaction were used for packaging: 3.5 µl for EL-VACV-3k-12k, 4 µl for EL-VACV-0.2k-0.7k and EL-CPXV-0.2k-0.7k, and 7 µl for EL-CPXV-0.2k-3k.
Validation of OPV expression libraries
For an assessment of their quality, all four EL constructed were extensively validated. To this end, the representativeness of the EL was assessed by randomly picking 25-60 recombinant plaques from every library and by sequencing the DNA inserts. The obtained sequences were aligned to a reference poxvirus genome to estimate the distribution of the sequences. Additionally, all four EL were serologically screened using monoclonal (anti-rA27 A1/6-15 and anti-CPXV 3D11) and polyclonal (goat anti-rA27) antibodies.
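The representativeness assessment above reduces to asking how much of the reference genome is covered by the aligned inserts of the randomly picked clones (cf. Figure S1). A small sketch of that computation; the alignment coordinates below are invented and the genome length is the approximate size of the VACV WR reference:

```python
# Fraction of a reference genome covered by aligned insert intervals.
GENOME_LENGTH = 194_711  # approximate length of the VACV WR genome (AY243312.1)

inserts = [  # (start, end) alignment coordinates of picked clones (hypothetical)
    (1_200, 4_800), (3_500, 9_100), (25_000, 27_400),
    (88_000, 95_500), (93_000, 101_200), (150_300, 158_900),
]

def covered_bases(intervals):
    """Total bases covered by a set of (start, end) intervals, overlaps merged."""
    total = 0
    current_start = current_end = None
    for start, end in sorted(intervals):
        if current_end is None or start > current_end:
            if current_end is not None:
                total += current_end - current_start
            current_start, current_end = start, end
        else:
            current_end = max(current_end, end)
    if current_end is not None:
        total += current_end - current_start
    return total

covered = covered_bases(inserts)
print(f"covered: {covered:,} bp ({100 * covered / GENOME_LENGTH:.1f}% of the genome)")
```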
Plaque immunoscreening of OPV expression libraries
All four constructed OPV EL were screened with selected polyclonal and monoclonal antibodies. The plaque-screening procedure was performed as described in the picoBlue™ Immunoscreening Kit (Agilent Technologies, Inc.) with minor changes. Briefly, 150 mm dishes with phage plaques growing for 4 hr at 42°C on a lawn of E. coli XL-1 Blue MRF' cells in 0.7% soft agar were overlaid with nitrocellulose filter disks (PALL Gelman Laboratory, Ann Arbor, MI, USA) presoaked in 10 mM isopropyl-β-thiogalactoside (IPTG). After incubation for 4 hr at 37°C, the filters were removed, washed in TBS-T at room temperature, and blocked in Tris-buffered saline (TBS) containing 1% bovine serum albumin. Filters were then incubated in succession with monoclonal antibodies/sera and HRP-conjugated secondary antibodies, with 3-5 TBS-T washes between every incubation step. The monoclonal antibodies were diluted to 0.5 µg/ml in TBS with 1% BSA. The sera VIG, rabbit anti-VACV, cat anti-CPXV, and goat anti-rA27 were diluted 1:1,000. Rat anti-CPXV serum was diluted 1:200. The binding of HRP-conjugated antibodies was detected through the color development reaction with the substrate 3-Amino-9-ethylcarbazole (AEC, Sigma). Immunopositive plaques resulted in small reddish dots on the membrane. This membrane was then aligned with the original agar plate, allowing identification of corresponding plaques. After staining, positive plaques were picked, suspended in buffer, and plaque-purified once on 90 mm dishes. For this, phage solutions were diluted (10^-1 to 10^-3), plated, and incubated overnight at 37°C. The plates were again overlaid with nitrocellulose membranes (PALL Gelman Laboratory), incubated for 4 hr at 37°C, and stained. Positive plaques were picked again (without adding chloroform to the buffer), and the contained pBK-CMV phagemid vector was excised in vivo as described in the ''ZAP Express® Predigested Gigapack® III Gold Cloning Kit'' (Agilent Technologies, Inc.) instruction manual. From the bacterial colonies appearing on the agar plates the excised pBK-CMV double-stranded phagemid vectors were isolated using the NucleoSpin® Plasmid QuickPure Kit (Macherey-Nagel) according to the manufacturer's instructions and the DNA amount quantified using the NanoDrop 1000.
PCR and DNA sequencing
The isolated pBK-CMV phagemid vectors were used as templates for PCR. For the amplification of long inserts (≥3 kb) the ''Expand Long Range, dNTPack'' (Roche Diagnostics GmbH, Mannheim, Germany) was used according to the manufacturer's instructions as a 25 µl reaction containing 3% DMSO. The amplification of shorter fragments was performed as a 25 µl reaction containing 1× PCR Buffer (Invitrogen), 4 mM MgCl2, 100 µM dNTPs, 0.3 µM of each primer (T7 primer: GTAATACGACTCACTATAGGGCG and T3 primer: ATTAACCCTCACTAAAGGGA), 1 unit of Platinum Taq DNA polymerase (Invitrogen), and about 50 ng of template DNA. The cycling was carried out in a Mastercycler® ep gradient (Eppendorf) under the following conditions: 95°C for 5 min, followed by 40 cycles of 95°C for 30 s, 57°C for 30 s, and 72°C for 60 s, completed by 72°C for 5 min, and cooling down to 4°C until further processing. The amplified DNA inserts were cycle sequenced using the flanking primers T3 and T7 and the BigDye® Terminator chemistry (Applied Biosystems, Weiterstadt, Germany).
Data processing
The obtained sequences were aligned to the VACV strain Western Reserve (WR) genome (GenBank acc. no. AY243312.1) for VACV genome-containing EL and to the CPXV strain Brighton Red genome (GenBank acc. no. AF482758) for CPXV libraries. The alignment was performed using the BLASTN program by optimizing for highly similar sequences (megablast).
Supporting Information
Figure S1 Validation of constructed genomic OPV expression libraries. Recombinant plaques were picked, the insert DNA sequenced and the obtained sequence aligned to a reference genome. Shown is the distribution of DNA inserts obtained from recombinant clones from (A) EL-VACV-3k-12k displayed on a VACV genome (GenBank AY243312.1), and (B) EL-CPXV-0.2k-0.7k displayed on a CPXV genome (GenBank AF482758.2). The graphics were created using the nucleotide database on the NCBI website (http://www.ncbi.nlm.nih.gov/nuccore). By choosing the appropriate reference genome and the display setting ''graphics'', the DNA inserts from recombinant clones could be defined as markers. Each DNA insert obtained from a recombinant clone is represented by a dot and the respective color-shaded bar. These colored bars indicate the size of the DNA insert and the covered genes. The pairs of horizontal green and red bars represent annotated genes on the dsDNA poxvirus genome. (TIF)
Figure S2 Screening of an expression library with an anti-CPXV antibody. For a validation of their complexity the constructed genomic EL were serologically screened with the monoclonal antibody 3D11 that was raised against native CPXV particles. Immunopositive signals are exemplarily indicated through white arrows on a stained nitrocellulose filter obtained through screening of EL-VACV-3k-12k. (TIF)
Figure S3 Validation of anti-rA27 antibody reactivity in ELISA. Polyclonal goat anti-rA27 and monoclonal mouse anti-rA27 were generated by immunizing with a recombinant A27 protein expressed in E. coli. The target reactivities of these antibodies were tested in an ELISA by coating the recombinant antigen and whole VACV particles onto ELISA plates and incubating with serially diluted antibodies: (A) Goat anti-rA27 serum, (B) Mouse anti-rA27 monoclonal antibody. (TIF) | 2014-10-01T00:00:00.000Z | 2011-07-14T00:00:00.000 | {
"year": 2011,
"sha1": "85a5158cc12a347fd19d34fd354e243f80fb57a3",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0021950&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "85a5158cc12a347fd19d34fd354e243f80fb57a3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
215786213 | pes2o/s2orc | v3-fos-license | On the computability of continuous maximum entropy distributions with applications
We initiate a study of the following problem: Given a continuous domain $\Omega$ along with its convex hull $\mathcal{K}$, a point $A \in \mathcal{K}$ and a prior measure $\mu$ on $\Omega$, find the probability density over $\Omega$ whose marginal is $A$ and that minimizes the KL-divergence to $\mu$. This framework gives rise to several extremal distributions that arise in mathematics, quantum mechanics, statistics, and theoretical computer science. Our technical contributions include a polynomial bound on the norm of the optimizer of the dual problem that holds in a very general setting and relies on a"balance"property of the measure $\mu$ on $\Omega$, and exact algorithms for evaluating the dual and its gradient for several interesting settings of $\Omega$ and $\mu$. Together, along with the ellipsoid method, these results imply polynomial-time algorithms to compute such KL-divergence minimizing distributions in several cases. Applications of our results include: 1) an optimization characterization of the Goemans-Williamson measure that is used to round a positive semidefinite matrix to a vector, 2) the computability of the entropic barrier for polytopes studied by Bubeck and Eldan, and 3) a polynomial-time algorithm to compute the barycentric quantum entropy of a density matrix that was proposed as an alternative to von Neumann entropy in the 1970s: this corresponds to the case when $\Omega$ is the set of rank one projections matrices and $\mu$ corresponds to the Haar measure on the unit sphere. Our techniques generalize to the setting of Hermitian rank $k$ projections using the Harish-Chandra-Itzykson-Zuber formula, and are applicable even beyond, to adjoint orbits of compact Lie groups.
Introduction
Entropy maximizing distributions. Let Ω be a subset of R d and let K = hull(Ω) denote the convex hull of Ω. Suppose one is given an A ∈ K. A natural question arises: Is there a canonical way to choose a probability measure supported on Ω that can be used to express A as a convex combination of points on Ω? When Ω is a discrete and finite set, this problem has been extensively studied and a canonical probability distribution was proposed by Jaynes [25,26]: among all probability distributions that can be used to express A as a convex combination of points in Ω, pick the one that maximizes the Shannon entropy. These distributions are referred to as maximum entropy (max-entropy) distributions and arise in machine learning, statistics, mathematics, and theoretical computer science (TCS). In TCS, these distributions have found many uses due to duality, connections to polynomials, and algorithms to compute them [20,36,2,14,11,1]; see [38].
In this paper we initiate a study of the computability when Ω is a continuous (and often nonconvex) manifold. Examples of interest include the set of rank k Hermitian projection matrices (related to the Grassmannian), or a convex body (in which case K = Ω).
Unlike the discrete setting, in the continuous setting the notion of finding a max-entropy distribution is not well-defined since a canonical notion of entropy does not necessarily exist. We instead consider relative entropy, Kullback-Leibler (KL) divergence with respect to a prior measure µ on Ω that corresponds to the density function f (X) ≡ 1 for all X ∈ Ω. For all of the manifolds mentioned above, there is a canonical measure that has this property and is called the uniform measure; see Section 2. This leads us to the following infinite dimensional convex optimization problem which gives a canonical way to write A as a convex combination of points in Ω: Find a measure ν on Ω that is continuous with respect to µ and, subject to the constraint that the expected point in K with respect to ν is A, ν minimizes the KL divergence to µ. Note that, by choice, ν is as close to the distribution µ as possible; hence we call it a maximum entropy distribution.
The class of extremal entropy maximizing distributions that arise in this manner have several properties that have led to their appearance, implicitly or explicitly, in several different areas: • the work of Klartag (inspired by a work of Gromov) on the isotropic constant [28,16], • the work of Khatri and Mardia on the Matrix Bingham distribution in statistics with applications to various scientific and engineering problems [6,27,22], • as shown here, the work of Goemans and Williamson on rounding semidefinite programs [15], • the works of Güler, Bubeck and Eldan on barrier functions for interior point methods [18,19,7], • the works of Band, Park, and Slater that defined the barycentric quantum entropy and proposed it as an alternative to the von Neumann entropy in the 1970s [3,32,37].
SDP rounding. Here, typically, A is a positive semi-definite (PSD) matrix that is computed using an SDP relaxation to some non-convex problem, and one of the goals is to round A to a vector. This involves choosing a distribution on the set V_1 defined above, and typical choices have been somewhat magical and lack an explanation. In the Goemans-Williamson setting, A is an n × n PSD matrix, and the density ν on Ω they choose to express A as a convex combination is as follows: pick a vector v ∈ R^n from the normal distribution with covariance matrix A. We show that this distribution is the maximum entropy distribution ν⋆ (corresponding to A) on V_1 with base measure induced by the Lebesgue measure on R^n, thus giving an optimization characterization of this measure; see Corollary 4.12. The proof relies on strong duality and a closed form expression for the dual objective integral on V_1; see Theorem 4.1.
Quantum entropy. In quantum mechanics, a density matrix ρ is a trace one complex n × n PSD matrix and describes the statistical state of a system. The extreme points in the set of density matrices are the pure states or P_1. von Neumann defined a notion of entropy [40] of ρ that is computed by first writing ρ as a convex combination ρ = Σ_{i=1}^n λ_i u_i u_i^*, where {u_i}_{i∈[n]} is an orthonormal basis for C^n, and then computing the negative Shannon entropy of the λ_i's. While the von Neumann entropy is a mathematically elegant notion, it was vigorously argued in the 1970s that it does not capture the uncertainty in ρ [3,32,37]. In fact, von Neumann's way to write ρ as a convex combination of pure states can be viewed as "the most terse", or entropy minimizing, one. In the same papers, an alternative way to define entropy of a density matrix was suggested - as the entropy of the entropy maximizing distribution with marginal ρ - and referred to as the barycentric quantum entropy. Unlike the von Neumann entropy, which has a simple formula (−Tr ρ log ρ), the barycentric entropy did not have an efficient algorithm that could compute it. Our algorithm to compute entropy maximizing distributions for P_1 mentioned above directly implies a polynomial time algorithm to compute the barycentric entropy of a density matrix (that is sufficiently in the interior) along with the probability density that achieves it; see Corollary 4.11.
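For comparison, the von Neumann entropy mentioned above has the closed form −Tr ρ log ρ and is trivial to compute from the eigenvalues of ρ. A short sketch (the example matrix is arbitrary):

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """Return -Tr(rho log rho) for a Hermitian PSD trace-one matrix rho."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]   # convention: 0 log 0 = 0
    return float(-np.sum(eigenvalues * np.log(eigenvalues)))

# Example: an arbitrary 3 x 3 density matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho = M @ M.conj().T
rho = rho / np.trace(rho).real

print("von Neumann entropy:", von_neumann_entropy(rho))
```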
Entropic barrier function. Bubeck and Eldan in [7] proved that the entropic barrier of a convex body K ⊆ R^d is a (1+o(1))n-self-concordant barrier on K. Roughly speaking, this barrier function, at a point of K, is defined to be the optimal value of a dual maximum entropy optimization problem when Ω = K and the measure is the Lebesgue measure on K. The computability of this barrier function at a point of K is not known in general. One obstacle is to get a reasonable bound on the norm of the optimal dual solution. An almost direct consequence of Theorem 4.2 implies such a bound for points that are sufficiently in the interior of K; see Corollary 6.6.
Preliminaries
Notation. Let C, R, R_+, N denote the complex, real, nonnegative real, and natural numbers respectively. For k, n ∈ N, let C^{k×n} and R^{k×n} denote the sets of k × n complex and real matrices respectively. A matrix M ∈ C^{n×n} is said to be Hermitian if M = M^*, where ^* denotes the conjugate transpose. A Hermitian matrix M is said to be PD (positive definite) and PSD (positive semidefinite) if its eigenvalues are positive and nonnegative respectively. For an n × n matrix X, we define diag(X) to be the length-n vector of the diagonal entries of X. If x is a vector, then we define diag(x) to be the diagonal matrix with entries the entries of x. For any k, n ∈ N, we equip the vector space C^{k×n} with the Frobenius inner product ⟨Y, Z⟩ := Tr(Y Z^*). We also denote ‖Y‖ := √⟨Y, Y⟩. Note that ⟨Y, Z⟩ ∈ R whenever Y, Z are Hermitian, so that the set of n × n Hermitian matrices is a real Hilbert space of dimension n². Also ⟨Y, Z⟩ ≥ 0 whenever Y, Z are PSD. We further let B_ε(Y) denote the open ε-ball centered at Y in the space in which Y lives (e.g., the n × n Hermitian matrices). Finally, we let hull(S) denote the convex hull of a set S in some ambient vector space.
Manifolds. In general, we let Ω be any smooth manifold that is embedded in a d-dimensional real Hilbert space V with inner product ⟨·, ·⟩. Let L(X) = B denote the affine space in which hull(Ω) is full dimensional, i.e., every element X ∈ hull(Ω) satisfies the equation L(X) = B. The concrete manifolds we consider are collections of matrices with some structure. In particular, for fixed n ∈ N, consider the following manifold within C^{n×n}. An n × n rank-k PSD projection is a PSD matrix with k eigenvalues equal to 1 and the rest equal to 0. P_k = P_k(n) := {n × n rank-k PSD projections}.
Note that P k is also a manifold within the space of n × n Hermitian matrices. 1 Other manifolds we consider are the complex unit sphere S n C ⊂ C n (which is related to P 1 ), the manifold of all rank one matrices (not necessarily trace one): V 1 := {vv ⊤ : v ∈ R n }, and a convex body K ⊂ R n .
We would also like to consider the convex hull of a given manifold Ω. To make sense of such a notion, we need to consider the manifold as being embedded in some ambient vector space. This ambient space is often the space of n × n Hermitian matrices in our examples. In general, we refer to the elements of hull(Ω) as marginals or marginals matrices.
Group actions. It is useful to understand the symmetries of some of the manifolds mentioned above in terms of groups that act on them. Recall that an n × n unitary matrix is an invertible matrix U for which U −1 = U * , and an n × n orthogonal matrix is an invertible matrix O for which O −1 = O ⊤ . The unitary and orthogonal groups (U (n) and O(n)) act on the manifolds discussed above as follows: • U (n) acts on column vectors in S n C and on hull(S n C ) by left multiplication. • U (n) acts on P k and on hull(P k ) by conjugation.
• O(n) acts on V 1 and on hull(V 1 ) by conjugation.
Note that the actions of U (n) on S n C and on P 1 are compatible in the sense that for x ∈ S n C and U ∈ U (n), we have (U x)(U x) * = U (xx * )U * where xx * ∈ P 1 .
Relative interior. The convex set hull(Ω) is not necessarily full dimensional in the ambient Hilbert space. To define a notion of interior for hull(Ω), we restrict to the minimal affine subspace in which Ω lives (this is given explicitly by L(X) = B discussed above). More generally, we make the following definition.
Definition 2.1 (Relative interior) Fix a convex subset S in a vector space V, and let L(X) = B be the minimal affine subspace containing S. For η > 0, we say that a point A ∈ S lies in the η-interior of S if B_η(A) ∩ {X ∈ V : L(X) = B} ⊆ S.
Here we usually consider S = P_k(n), and we will be interested in the case where η ≥ 1/poly(n).
Measures and densities.
Often, the manifolds Ω we consider have some geometric structure (e.g., a group action), and we want to consider measures which interact nicely with this structure. To make sure this happens, we restrict to the class of measures which are given by continuous density functions on Ω. To make sense of this, we need a natural base measure µ on Ω which corresponds to the density function f(X) ≡ 1. (E.g., in the case of Ω = C^n or Ω = R^n, the Lebesgue measure often plays this role.) In particular, the support of µ should be equal to Ω. In the case of Ω = P_k, there is a canonical measure which is appropriately called the uniform measure: we define µ_k to be the unique unitarily invariant measure on P_k, where U(n) acts by conjugation (as discussed above). Hence, equivalently (and more formally), we restrict to the class of measures on P_k which are absolutely continuous with respect to µ_k. We prove here the existence of µ_k, a classical result. Proposition 2.1 (Existence of µ_k) There exists a distribution µ_k on P_k (which we call the uniform distribution). If X is a random variable distributed according to µ_k, then X and UXU^* have the same distribution for any unitary U.
Proof:
Pick random complex unit vectors v_1, ..., v_k one at a time, each chosen uniformly from the unit sphere of the orthogonal complement of the previously chosen vectors, to obtain a collection of k orthonormal vectors in C^n. Form a k × n matrix P by letting the v_i be the rows of P. Defining X := P^*P ∈ P_k gives a distribution µ_k on P_k.
For unitary invariance, note that this property holds for the choice of v 1 by construction. This can then be inductively applied to v 2 , . . . , v k by composing the given unitary with the appropriate projection.
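A concrete way to sample from µ_k in the spirit of this construction is to orthonormalize the columns of a complex Gaussian matrix and project onto their span; unitary invariance of the Gaussian ensemble gives the required invariance of the resulting projection. A short numerical sketch (dimensions and sample sizes are arbitrary):

```python
import numpy as np

def sample_rank_k_projection(n: int, k: int, rng) -> np.ndarray:
    """A sample from mu_k: project onto the span of k complex Gaussian vectors."""
    G = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(G)            # orthonormal basis of the column space of G
    return Q @ Q.conj().T             # the rank-k Hermitian projection onto it

rng = np.random.default_rng(1)
X = sample_rank_k_projection(n=4, k=2, rng=rng)
print("trace     :", np.trace(X).real)          # equals k
print("idempotent:", np.allclose(X, X @ X))
print("Hermitian :", np.allclose(X, X.conj().T))

# By unitary invariance, the mean of many samples should be (k/n) * Identity.
avg = sum(sample_rank_k_projection(4, 2, rng) for _ in range(2000)) / 2000
print("max deviation from (k/n) I:", np.abs(avg - 0.5 * np.eye(4)).max())
```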
We also consider the standard Lebesgue measure on R n for convex bodies and its pushforward measure µ through the map v → vv ⊤ on V 1 . Note that S n C also has a canonical unitarily invariant measure, usually called the Haar measure. The pushforward of this measure through the map v → vv * yields the unitarily invariant measure µ 1 on P 1 .
Integration/Counting oracle. We are interested in computing the following exponential integral for a given Y in our Hilbert space V .
Definition 2.2 (Exponential integrals)
Fix n ∈ N and let µ be a measure with support Ω, a manifold embedded in the real Hilbert space V. We define the following function on an input Y ∈ V: E_µ(Y) := ∫_Ω e^{−⟨Y,X⟩} dµ(X). Whenever µ = µ_k and Ω = P_k, we use the shorthand notation E_k(Y). We sometimes also refer to these integrals as exponential integrals.
A strong integration/counting oracle for Ω and µ outputs two quantities, given an element Y from the ambient Hilbert space V of Ω: 1. the value E_µ(Y), and 2. the matrix ∇E_µ(Y), defined so that ⟨∇E_µ(Y), Z⟩ = −∫_Ω ⟨X, Z⟩ e^{−⟨Y,X⟩} dµ(X) for any Z ∈ V. In the case of Ω = P_k, Y and Z are Hermitian. Further, since the measure µ_k is unitarily invariant, we can assume that Y is diagonal, and we expect the running time of the counting oracle to depend polynomially on n and the number of bits needed to represent e^{−y_i} for any i, where y_1, ..., y_n are the eigenvalues (diagonal elements) of Y. As we will show, in the special case when Ω = V_1 and µ is the pushforward of the Lebesgue measure, we can compute the integral E_µ(Y) exactly in time polynomial in the bit complexity of Y due to a direct formula. This happens because the measure µ is a product measure, which is not the case for µ_k.
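For Ω = V_1 with the pushforward of the Lebesgue measure, ⟨Y, vv^⊤⟩ = v^⊤Yv, so, assuming the exponential integral has the form E_µ(Y) = ∫_Ω e^{−⟨Y,X⟩} dµ(X) as above, the classical Gaussian identity ∫_{R^n} e^{−v^⊤Yv} dv = π^{n/2}/√det(Y) for positive definite Y gives a closed form consistent with the direct formula alluded to here. The sketch below checks this identity by importance sampling; the choice of proposal distribution and the test matrix are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
B = rng.standard_normal((n, n))
Y = B @ B.T + n * np.eye(n)                       # an arbitrary positive definite Y

closed_form = np.pi ** (n / 2) / np.sqrt(np.linalg.det(Y))

# Importance sampling with proposal v ~ N(0, Y^{-1}).
Sigma = np.linalg.inv(Y)
L = np.linalg.cholesky(Sigma)
v = rng.standard_normal((200_000, n)) @ L.T       # rows are samples from N(0, Sigma)
quad = np.einsum("ij,jk,ik->i", v, Y, v)          # v^T Y v for every sample
proposal_norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma))
weights = np.exp(-quad) * proposal_norm / np.exp(-0.5 * quad)

print("closed form        :", closed_form)
print("importance sampling:", weights.mean())
```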
The maximum entropy framework
In this section we present our maximum entropy convex program. Fix a manifold Ω in a d-dimensional real Hilbert space with inner product ⟨·, ·⟩, and let L(X) = B denote the corresponding affine space containing Ω. Let µ be the base measure on Ω and let A be a point in K := hull(Ω). Our goal is to find a density function ν with marginal A that minimizes the KL-divergence with respect to µ; that is, we minimize ∫_Ω ν(X) log ν(X) dµ(X) over densities ν ≥ 0 on Ω satisfying ∫_Ω X ν(X) dµ(X) = A and ∫_Ω ν(X) dµ(X) = 1.
We use the shorthand Prim_µ(A) (or Prim_k(A) if µ = µ_k) to refer to this primal optimization program. We mainly consider the case of µ = µ_k and Ω = P_k, or Ω = V_1 with µ the pushforward of Lebesgue measure. In these cases Y will come from some subspace of the n×n Hermitian matrices. Drawing from the intuition that these base measures are uniform over the manifold, and hence in some sense maximize entropy, we say the KL-divergence minimizing measure is entropy maximizing. However, we note that this framework is also applicable to other base measures, in particular to the case when Ω is a convex body in R^d and µ is the Lebesgue measure. The fact that the entropy integral (without the minus sign) is convex as a function of the density ν follows from the fact that this integral is precisely the KL divergence between the probability distribution corresponding to ν and the distribution µ. Convexity of the KL divergence for probability distributions is then a well-known fact.
Efficiently solving this convex program directly is a priori impossible as the support of ν is infinite. To find a succinct representation for the optimal ν⋆, we turn to the dual program, inf_Y ⟨Y, A⟩ + log ∫_Ω e^{−⟨Y,X⟩} dµ(X) (see Section A.1 for a derivation), which gives us a nice representation of the max-entropy density function ν⋆. We often use the shorthand Dual_µ(A) (or Dual_k(A) if µ = µ_k) to refer to this program.
In the case of P_k with uniform measure µ_k, the optimal solution to Dual_k(A) is given by a Hermitian matrix Y⋆. By strong duality (see Theorem 4.1), this in turn shows that the max-entropy density function ν⋆ takes on a nice form: ν⋆(X) = e^{−⟨Y⋆,X⟩}/E_k(Y⋆). As a note, in the case of Ω = P_k this matrix Y⋆ is only unique up to a shift by a multiple of the identity matrix. Issues arising from non-uniqueness can be handled by restricting to the minimal affine subspace in which hull(P_k) lives, as referred to in the discussion surrounding Definition 2.1. However, as A tends to the boundary of hull(Ω), Y⋆ can be seen to tend to infinity as the support of the measure ν⋆ tends to lower dimensions.
Mathematical and computational results
Our first result shows that strong duality holds.
Theorem 4.1 (Strong duality)
Let Ω be a manifold that is embedded in a d-dimensional real Hilbert space with an inner product ⟨·, ·⟩, and let µ be a measure supported on Ω. For any A in the relative interior of the convex hull of Ω, the optimal values of the primal and dual objective functions coincide, and the corresponding max-entropy distribution has a density function of the following form for some Y⋆: ν⋆(X) = e^{−⟨Y⋆,X⟩}/E_µ(Y⋆). The proof of this result uses standard techniques and appears in the appendix (Sections A.2 and A.3). This result applied to P_k and µ_k shows that optimizing Dual_k(A) is in fact equivalent to optimizing Prim_k(A), and therefore the max-entropy measure has the exponential form described above.
With strong duality in hand, we focus on the computability of the optimal matrix Y ⋆ for the dual program. To do this we use a version of the ellipsoid algorithm (see Theorem 8.1 and the algorithm that follows), for which we need two things.
First, we need an upper bound on some norm of the dual optimal solution. If Y⋆ is the optimal solution, then the number of iterations of the ellipsoid algorithm depends on log ‖Y⋆‖. That said, it may seem that a bound depending on e^{1/η}, where η is such that B_η(A) ⊂ hull(Ω), is enough to achieve polynomial dependence on 1/η. However, this is not enough, since the integral appearing in the dual depends polynomially on the number of bits needed to represent e^{−y_i}, where the y_i's are the entries or eigenvalues of a given input Y. Hence, we actually need a bound on ‖Y⋆‖ that is polynomial in 1/η, which is achieved in our bounding box result below. Note that this issue is not surprising, as it crops up in exactly the same way in the discrete maximum entropy case (see [36]).
We give here a bounding box result which is more general than we need for the rank-k projections case (Ω = P_k and µ = µ_k). It relies on a key "balance" property of the measures. This notion extends important properties of the discrete uniform measure to continuous measures on manifolds and is one of the key notions we introduce. We see in Definition 6.2 how this notion can be used to give a more refined notion of interior (beyond the η parameter discussed above). Conceptually, it allows us to give a measure-theoretic relaxation of the notion of a separating hyperplane.
Theorem 4.2 (Bounding box)
Let µ be a measure supported on a manifold Ω embedded in a d-dimensional real Hilbert space. Suppose that µ is balanced, in the sense of Definition 4.1. Further, let A be an element of the η-interior of the convex hull of Ω. Then there is an optimal solution Y⋆ to the dual program whose norm is bounded polynomially in 1/η; the precise bound is given in Section 6. Corollary 6.4 and Corollary 6.6 give bounds for rank-k projections and convex bodies as corollaries.
Remark 4.3
Our bounding box result significantly generalizes the discrete case (Theorem 2.7 in [36]). The uniform distribution in the discrete case has atoms of uniformly strictly positive (at worst singly-exponentially small) mass at all points, and this implies a bound on optimal dual solutions.
In the continuous case this is no longer true; the notion of balance then fills the gap.
Second, at each step of the ellipsoid algorithm, we need to be able to evaluate the dual objective function and its gradient at given input Y . The hardest part of such a computation comes in evaluating E µ , the exponential integral portion of the objective function. We show that if we have access to such an evaluation oracle, then under very general conditions, we can compute the maximum entropy distribution.
Theorem 4.4 (Ellipsoid method-based general algorithm)
Let µ be a balanced measure with support on a manifold Ω embedded in a d-dimensional real Hilbert space. Let the affine space in which Ω lies, L(X) = B, be given as input (L, B). Assume that Ω is contained in a ball of radius r. There exists an algorithm that, given A in the η-interior of hull(Ω), any ε > 0, and a strong counting/integration oracle for the exponential integral E_µ(Y), returns Y• such that F_A(Y•) ≤ F_A(Y⋆) + ε, where F_A is the objective function for the dual program Dual_µ(A), and Y⋆ is an optimum of the dual program. The running time of the algorithm is polynomial in d, 1/η, log(1/ε), log(r), and the number of bits needed to represent A, L, and B.
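The following toy sketch is not the ellipsoid method of the theorem: it runs plain gradient descent on the dual for Ω = P_1, replaces the strong oracle by naive Monte Carlo over µ_1, and assumes the dual objective has the standard max-entropy form F_A(Y) = ⟨A, Y⟩ + log E_µ(Y), whose gradient is A minus the mean of the induced density. The target matrix, step size, and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

def sample_pure_states(m):
    """m samples X = u u* with u uniform on the complex unit sphere in C^n."""
    u = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return np.einsum("mi,mj->mij", u, u.conj())

def marginal_under(Y, m=20_000):
    """Monte Carlo estimate of the mean of X under nu(X) proportional to exp(-<Y, X>)."""
    X = sample_pure_states(m)
    w = np.exp(-np.einsum("ij,mji->m", Y, X).real)
    w /= w.sum()
    return np.einsum("m,mij->ij", w, X)

A = np.diag([0.5, 0.3, 0.2]).astype(complex)   # target marginal, interior of hull(P_1)

Y = np.zeros((n, n), dtype=complex)            # start from the uniform measure mu_1
for _ in range(300):
    grad = A - marginal_under(Y)               # gradient of F_A(Y) = <A,Y> + log E(Y)
    Y = Y - 1.0 * grad                         # plain (noisy) gradient step

print("target A:\n", np.round(A.real, 2))
print("fitted marginal:\n", np.round(marginal_under(Y, 100_000).real, 2))
```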
Our next result says that in fact we have an efficient strong counting oracle for E k on the domain P k with measure µ k .
Theorem 4.5 (Counting oracle)
There is an algorithm that, given n ∈ N, k ∈ [n], an n × n real diagonal matrix Y = diag(y), and a δ > 0, returns numbers Ē and Ḡ that approximate E_k(Y) and its gradient to within accuracy δ, where E_k is the exponential integral defined above (and in Definition 2.2). The running time of the algorithm is polynomial in n, log(1/δ), and the number of bits needed to represent e^{−y_i} for any i ∈ [n].
The proof of this theorem for k = 1 is elementary but relies on the interesting connection between the complex unit sphere and the probability simplex. This connection also yields an exact sampling algorithm; see Proposition 7.10. For k > 1, the proof of the theorem above relies on the Harish-Chandra-Itzykson-Zuber formula [21], [24]; see Theorem 7.7.
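The connection referred to here is presumably the classical fact that if u is uniform on the complex unit sphere in C^n, then the vector of squared moduli (|u_1|², ..., |u_n|²) is uniform on the probability simplex (a Dirichlet(1, ..., 1) vector). A quick empirical check of the first two moments:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4, 200_000

z = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
u = z / np.linalg.norm(z, axis=1, keepdims=True)   # uniform on the complex unit sphere
p = np.abs(u) ** 2                                 # squared moduli: points on the simplex

q = rng.dirichlet(np.ones(n), size=m)              # uniform distribution on the simplex

print("means (sphere)    :", np.round(p.mean(axis=0), 3))
print("means (Dirichlet) :", np.round(q.mean(axis=0), 3))
print("vars  (sphere)    :", np.round(p.var(axis=0), 4))
print("vars  (Dirichlet) :", np.round(q.var(axis=0), 4))
```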
Remark 4.6
In the case of V_1 with the pushforward of Lebesgue measure, there is an exact formula to compute the corresponding dual optimum for positive definite marginals A: Y⋆ = (1/2)A^{−1}; see Corollary 9.4. Positive-definiteness of the input Y is in fact required for the dual objective to be finite, which is in stark contrast with the P_k case, where any Hermitian matrix is allowed. These points suggest a conceptual divide between the Lebesgue measure case and the rank-k projections case. We do not expect such a formula for Y⋆ in the case of P_1 and, indeed, the lack of one has been one of the obstacles for efficient algorithms for quantum barycentric entropy and computing the normalizing constant of the matrix Bingham distribution.
Remark 4.7
In this paper we primarily consider the best possible setting where the running time of the counting oracle depends logarithmically on the accuracy. We refer to such counting oracles as exact. We note that our framework does allow for counting oracles where the dependence is polynomial in 1/δ.
Remark 4.8
Güler in [17] studies the characteristic function of a convex cone. In our language, the characteristic function of a cone is the exponential integral E_K(y) with respect to the Lebesgue measure on the dual cone of K.
Such an explicit formula gives a route to efficiently computing the dual objective function in this case.
The bounding box and counting oracle for µ k and P k then imply that the ellipsoid method-based algorithm from Theorem 4.4 gives a polynomial time algorithm for approximately computing Y ⋆ , the optimum of the program Dual k (A).
Corollary 4.9 (Ellipsoid method-based efficient algorithm for P k )
There exists an algorithm that, given n ∈ N, k ∈ [n], a trace-k PD matrix A in the η-interior of the convex hull of the set of n × n rank-k PSD projection matrices (i.e., hull(P_k)), and an ε > 0, returns a Hermitian matrix Y• such that F_A(Y•) ≤ F_A(Y⋆) + ε, where F_A is the dual objective function and Y⋆ is an optimal solution to the dual program Dual_k(A).
The running time of the algorithm is polynomial in n, 1/η, log(1/ε), and the number of bits needed to represent A.
We further discuss the closeness of the distributions associated to Y • and Y ⋆ from the previous Corollary in Appendix C.
Remark 4.10
Notice that the dependence on 1/η means that we do not achieve a polynomial time algorithm for A near the boundary of hull(P_k). This dependence comes from the fact that the bounding box (Theorem 4.2) is dependent on 1/η. One may then naturally ask whether this bounding box dependence can be improved. It turns out that it cannot in this case; see Remark 6.5. Note that this differs from the discrete case, where in [38] the authors are able to remove this 1/η dependence under certain assumptions on the polytope.
Applications
Barycentric quantum entropy. In [37], Slater discusses the notion of barycentric quantum entropy of a density matrix, and compares it to that of von Neumann entropy. His investigation of this notion was prompted by the work of Band and Park [3,32], who critiqued the use of von Neumann entropy as a good indicator of the uncertainty of the given density matrix. In particular, they argue that a better notion of entropy would relate to distributions on all possible pure states, whereas the von Neumann entropy is derived from the discrete distribution on the pure states corresponding to eigenvectors of the matrix. In response to this, Slater defines a notion of quantum entropy in terms of a max-entropy program on the set of all pure states. He then goes on to show how one might determine the quantum entropy in a few specific cases.
Definition 4.2 (Barycentric quantum entropy)
Let ρ be an n × n Hermitian density matrix (trace-1, positive semidefinite). Then the barycentric quantum entropy of ρ is defined (in our notation) as the optimal value of the max-entropy program over probability densities on P 1 (with respect to µ 1 ) whose marginals equal ρ, where P 1 denotes the set of pure states and µ 1 denotes the unitarily invariant measure on P 1 .
Our results for computing max-entropy measures on P 1 immediately imply efficient computability of the barycentric quantum entropy for density matrices that are polynomially far in the interior (i.e., in the η-interior for η at least inverse-polynomial).
Corollary 4.11 (Computability of barycentric quantum entropy)
There exists an algorithm that, given a Hermitian density matrix ρ in the η-interior of the set of Hermitian density matrices and an ε > 0, returns a number H̄ such that |H̄ − H b (ρ)| < ε. The running time of the algorithm is polynomial in n, 1/η, log(1/ε), and the number of bits needed to represent ρ.
Goemans-Williamson SDP rounding.
In their seminal paper, Goemans and Williamson [15] gave a rounding scheme that rounds a given PD matrix A to a vector. Their method draws a vector v from a particular distribution on R n based on the matrix A.
Definition 4.3 (Goemans-Williamson measure)
Given n ∈ N and a real positive definite n×n matrix A, the Goemans-Williamson measure µ GW can be defined via a sampling process on R n as follows.
1. Sample g ∈ R n from the standard multivariate Gaussian distribution.
2. Compute v = V g, where V is a matrix with A = V V ⊤ .
It is then straightforward to compute the marginals matrix associated to this distribution as follows: Thus, if we map R n to V 1 via v → vv ⊤ and also pushforward the Lebesgue measure through this map, the above is precisely the marginal constraint in our max-entropy framework. This observation implies that the pushforward of the measure µ GW is a (strictly) feasible solution to the max-entropy primal program on the domain V 1 with the pushforward of the Lebesgue measure. We show that it is also the optimal solution to the max-entropy program.

Entropic barrier function. Bubeck and Eldan in [7] prove that the entropic barrier of a convex body K ⊆ R d is a (1 + o(1))d-self-concordant barrier on K, improving a seminal result of Nesterov and Nemirovski [30]. In fact this gives the first explicit construction of a universal barrier for convex bodies with optimal self-concordance parameter.
Definition 4.4 (Entropic barrier)
Given a convex body K ⊆ R d , define the entropic barrier for K as the real-valued function on the interior of K given as follows: This is precisely the maximum entropy dual program, up to negation of y in the expression.
Open questions still remain about the efficient computability of the entropic barrier. This is in particular true in the case where K is a polytope, given as a membership oracle. Towards this, the following is essentially a corollary to Theorem 4.2 (see Section 6.3 for a full proof), and can be used to efficiently compute the entropic barrier at points which are in the η-interior of K.
Corollary 4.13 (Bounding box for convex bodies)
Let Ω ⊂ R d be a convex body contained in a ball of radius R. Further, let A be an element of the η-interior of the convex hull of Ω. Then there is an optimal solution Y ⋆ to the dual program such that ∥Y ⋆ ∥ ≤ poly(η −1 , d, log(R)).
Details of how this implies computability of the entropic barrier are omitted from this paper.
Technical overview
In this section, we give overviews of the proofs of the main results of this paper and compare our techniques with those of previous work. We start by describing the approach of [36] in the case of discrete uniform measures µ with finite support Ω ⊆ {0, 1} d . In this case, the marginals vector A of a measure ν on Ω is defined by setting A k to be the expected value of the kth entry of x when x is chosen according to ν. Note that the marginals vector A is always an element of hull(Ω). The problem the authors of [36] solve is described as follows: given a finite subset Ω and a desired marginals vector A in the η-interior of hull(Ω), compute the probability measure on Ω with marginals A which maximizes entropy. They consider the dual formulation which gives rise to measures on Ω of the following succinct form for some real vector y ⋆ : By strong duality ν = ν ⋆ is the entropy maximizing measure, and they then use the ellipsoid method to approximate y ⋆ . We generalize their approach to continuous measures µ on continuous domains Ω. For the most part, the ellipsoid algorithm can be applied in the same way as in the discrete case once we have the three main pieces in hand: (1) strong duality, (2) a bound on Y ⋆ , and (3) the strong counting oracle. Even in the continuous case, one can show that strong duality holds via a certain Slatertype condition (see Sections A.2 and A.3). What makes the passage from the discrete case to the continuous case much more interesting and nontrivial is proving the remaining two main pieces.
Proof overview: bounding box
The goal of this section is to explain the proofs of the main bounding box result and its corollaries. We first describe the approach of the discrete µ case discussed above. Note that for B ∈ hull(Ω), there exists some X 0 ∈ Ω such that Because µ is a discrete uniform measure, we have µ({X 0 }) = |Ω| −1 . This implies a bound on Y ⋆ as follows, via the dual objective function F A (Y ): The lower bound on F A (Y ⋆ ) above follows from restricting the integral (which is a sum in the discrete case) to the single point X 0 . This demonstrates exactly why this argument fails in the continuous case, because in that case we have µ({X}) = 0 for all X ∈ Ω. This is the first difficulty we must overcome. We need a way to restrict the dual objective integral to a region of Ω which has positive mass, emulating the role of atoms in the discrete case.
We introduce a two-parameter interior for the measure µ. We say that A is in the (η, δ)-interior of µ if every half-space intersecting the η-ball about A contains at least δ mass of µ (Definition 6.2). Instead of restricting the dual integral to a single point of Ω, we restrict it to the appropriate δ-mass to obtain a bound on Y ⋆ : We explain this formally in Lemma 6.1. This leads to the second difficulty. Our bounding box theorem only refers to the η parameter, and so we need a way to handle or control δ in terms of η and d.
Here is where the key balance property comes into play. We say that a measure µ is balanced if for all ε > 0 and X ∈ Ω, the ε-ball about X contains at least exp(−poly(ε −1 , d)) of the mass of µ (Definition 4.1). This links the two interiority parameters: from any point of the ε-interior of hull(Ω), there will be at least exp(−poly(ε −1 , d)) mass in the direction of any X ∈ Ω on the boundary.
The crucial feature of the balance property is then how this linking of the parameters allows one to transfer between them. Specifically, for a balanced measure, the η-interior of hull(Ω) is contained in the (η/2, exp(−poly(2/η, d)))-interior of µ. To see this, let A be in the η-interior of hull(Ω). Hence, any half-space which intersects the η/2-ball about A contains another η/2-ball in hull(Ω). By translating this ball toward a point of Ω, we can assume that the half-space contains an η/2-ball about a point of Ω. Since µ is balanced, this implies A is in the (η/2, exp(−poly(2/η, d)))-interior of µ. At this point, the rest of the proof of Theorem 4.2 is straightforward. For balanced µ and A in the η-interior of hull(Ω), we actually have that A is in the (η/2, exp(−poly(2/η, d)))-interior of µ. The two-parameter bound described above then implies ∥Y ⋆ ∥ ≤ poly(1/η, d). To obtain bounding boxes for µ k on P k , the n × n rank-k projections (Corollary 6.4), and for uniform measures on convex bodies (Corollary 6.6), we then demonstrate balance properties. In the case of µ k , P k ⊂ B √ k (0) can be covered by at most exp(poly(log δ −1 , n)) balls of radius δ for any δ > 0, morally because of a standard covering bound in ≈ n 2 real dimensions (made precise in the proof of Corollary 6.4). Therefore a δ-ball about some point of P k must contain at least exp(−poly(log δ −1 , n)) of the mass of µ k , and unitary invariance then implies that this is actually true for all points of P k . For uniform measures µ on convex bodies K contained in a ball of radius R, we prove the bounding box using similar arguments as follows. By the volume ratio computation above, every δ-ball contained in K contains at least (δ/R) d of the mass of µ. Therefore every A in the η-interior of hull(Ω) is also in the (η/2, (η/(2R)) d )-interior of µ, since every half-space intersecting the η/2-ball about A contains another η/2-ball in K. The bounding box then follows from the two-parameter bound discussed above (Lemma 6.1).
Proof overview: counting oracle for P 1 and V 1
The goal of this section is to explain why we can efficiently evaluate and compute the gradient of the exponential integral in the case of Ω = P 1 and Ω = V 1 . First consider the case of Ω = V 1 , where µ is the pushforward of the Lebesgue measure through x → xx ⊤ . In this case we have a very explicit formula whenever Y is positive definite: Since µ is the pushforward of the Lebesgue measure through x → xx ⊤ , this expression follows from the following classical Gaussian integral formula: This is demonstrated formally in Proposition 9.3. We show how this leads to our optimality characterization of the Goemans-Williamson measure at the end of this section. The above Gaussian formula for V 1 suggests a natural approach for computing E 1 on P 1 . Allowing complex Hermitian matrices, note that P 1 is the set of norm-1 elements of V 1 . Hence, we "integrate out" the norm of the elements of V 1 , in an attempt to obtain a similar formula for P 1 . We do this via a standard change of variables (equalities are up to scalar): This shows that this approach fails: that is, integrating out the norm does not provide us a formula for E 1 (Y ) (for more discussion see Section 9.2).
This demonstrates the first difficulty for constructing a counting oracle for P 1 . Normalizing the max-entropy measure on V 1 as above yields a measure on P 1 which is not a max-entropy measure. Max-entropy measures on P 1 and V 1 are therefore fundamentally different objects, and thus constructing the associated counting oracles requires different techniques. In particular the well-known Gaussian integral formulas cannot help us in the case of P 1 .
The remarkable fact is then that max-entropy measures on P 1 can be translated into maxentropy measures on a very simple polytope: the standard simplex in R n . We have the following equality for real Y = diag(y), where m is the Lebesgue measure on the simplex ∆ 1 := {p ∈ R n + : Put another way, max-entropy measures on P 1 , a nonconvex manifold, correspond to max-entropy measures on ∆ 1 , a convex polytope. To see this, first note the following for any m 1 , . . . , m n . The first equality is the Bombieri inner product formula (Lemma 7.2), and the second inequality is a basic induction after a change of variables: The exponential equality then follows from limiting, since P 1 and ∆ 1 are compact and since e − Y,X and e − y,x are limits of polynomials. This argument also implies the more general fact: that m is the pushforward of µ 1 through the map φ : X → diag(X): This transfer to the simplex now leads to an explicit computation for E 1 (Y ) when Y = diag(y). (Considering diagonal Y is actually without loss of generality, see the discussion in Section 7.) By making a change of variables, the simplex integral is an iterated convolution: This is stated formally in Lemma 7.3. Applying the Laplace transform L converts this convolution into a partial fraction decomposition problem for distinct values of y i : Computing the values of c i via a standard partial fractions formula gives: . This is stated formally in Proposition 7.4. Here M (−y) is a Vandermonde-like matrix which arises when forming the common denominator of the last expression, given (for the case of distinct y i s) as follows: We define this matrix formally in Definition 7.1. This brings us to the second difficulty for constructing a counting oracle for P 1 . When the values of y i are not distinct, then the denominator vanishes and this formula cannot be used. Even though E 1 is continuous, this could still be a major problem: if for example the gradient of E 1 (Y ) becomes large as y i approaches y j , then computing E 1 (Y ) could become computationally infeasible.
To handle this difficulty, we take limits by successively applying L'Hopital's rule. One iteration for y 1 = y 2 goes as follows: .
The key observation here is the fact that the numerator is still a determinant, due to the fact that only one column of M (−y) depends on y i for all i. Applying L'Hopital's rule as many times as is necessary leads to the following, where λ i represent the distinct values of y with multiplicities m i : Note that M (−λ) is a matrix similar to M (−y) above which handles the non-distinctness (we unify the notation of these matrices in Definition 7.1). A similar expression for the gradient is achieved using the same techniques, and so we state it here without further detail: M p (−λ) is another, similar Vandermonde-like matrix, see Proposition 7.6 and Definition 7.2.
Since the entries of M (−λ) and M p (−λ) have bit complexity polynomial in n and the bit complexity of e −y i , their determinants have the same bit complexity. Therefore these formulas, for E 1 (Y ) and its gradient, lead to an efficient counting oracle for P 1 .
The optimality of Goemans-Williamson measure. As a consequence of Equation (1), we now show briefly how this formula is used to prove that the Goemans-Williamson measure µ GW with respect to a real symmetric positive definite matrix A is a max-entropy measure on V 1 . For A = V V ⊤ , the measure µ GW is defined to be distributed according to xx ⊤ := (V g)(V g) ⊤ where g is a standard Gaussian in R n . By the change of variables formula, xx ⊤ is distributed as follows on We state this formally in Proposition 9.1. To prove that this is a max-entropy measure, we determine the critical point of the dual objective with respect to real symmetric positive definite Y : Proof overview: sampling for P 1 . We now discuss how to sample from max-entropy distributions on P 1 . Our main algorithm (Theorem 4.4) gives an efficient oracle for approximating the max-entropy density function: The main problem is that it is not at all clear how to use such a density function to sample from a manifold. We avoid this difficulty by transferring the problem of sampling to the simplex ∆ 1 for Y ⋆ = diag(y ⋆ ), using the following fact discussed in the previous section: The sampling process for P 1 then occurs in two parts. First, we sample from the max-entropy distribution on the simplex, one coordinate at a time. We use the right-hand side of the above expression to compute the cumulative density function (CDF) for each coordinate, conditioned on the previously sampled coordinates. Formulas and computations for these conditioned CDFs are very similar to that of the counting oracle, and hence we omit them here (see Corollary 7.13).
Once we have a sample x on the simplex, we need to convert it into a sample on P 1 by considering its inverse image under the map φ : X → diag(X). The difficulty that now arises is the fact that there are many elements of P 1 which map to the same simplex element under φ.
Fortunately, there is a principled way to select from these possibilities. The fiber φ −1 (x) is an orbit of the action of diagonal unitary matrices on P 1 by conjugation. Since Y ⋆ is diagonal, this implies the max-entropy measure ν(X) is uniform when restricted to φ −1 (x). Given x, we then sample X from φ −1 (x) by picking an arbitrary X 0 ∈ φ −1 (x) and conjugating by a uniformly random diagonal unitary matrix.
Hence, to sample X from P 1 we (1) sample x from the simplex, and then (2) sample X uniformly from φ −1 (x). This samples X from the correct measure due to the disintegration theorem, which says the following for any f : That is, the measure µ 1 can be split into measures on ∆ 1 and on the fibers φ −1 (x) (see Proposition 7.11). Therefore, the above sampling process efficiently samples the max-entropy measure on P 1 with density ν(X).
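To make the two-step process concrete, the following is a minimal sketch (our own illustration). For step (1) it substitutes a simple rejection sampler against the density proportional to e^{-⟨y,x⟩} on the simplex; this is only a stand-in for the coordinate-wise CDF method of Corollary 7.13 and is efficient only when the entries of y are not too spread out. Step (2) lifts the simplex sample to P 1 by taking |v i |² = x i with independent uniform phases, which is exactly conjugation of a fixed preimage by a uniformly random diagonal unitary.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_simplex_exp_tilt(y, rng):
    # Rejection-sample x on the simplex with density proportional to
    # exp(-<y, x>) w.r.t. the uniform measure; acceptance probability is
    # exp(-(<y, x> - min_i y_i)) <= 1.
    y = np.asarray(y, dtype=float)
    y_min = y.min()
    while True:
        x = rng.dirichlet(np.ones(len(y)))
        if rng.random() < np.exp(-(x @ y - y_min)):
            return x

def lift_to_P1(x, rng):
    # Lift a simplex point x to a rank-1 projection X = v v* with
    # |v_i|^2 = x_i and uniformly random phases (a random diagonal unitary).
    phases = np.exp(2j * np.pi * rng.random(len(x)))
    v = np.sqrt(x) * phases
    return np.outer(v, v.conj())

y = np.array([0.0, 1.0, 2.0])
X = lift_to_P1(sample_simplex_exp_tilt(y, rng), rng)
assert np.isclose(np.trace(X).real, 1.0)   # Hermitian, PSD, rank one, trace 1
```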
Proof overview: extending the counting oracle for P 1 to P k
For the case of P k and µ k , we want to generalize the formulas of the k = 1 case. To do this, we make use of the famous Harish-Chandra-Itzykson-Zuber formula (Theorem 7.7) for integrals over the Haar measure of the unitary group U (n). It is stated as follows for Hermitian Y, B with distinct eigenvalues y i , β i : . . . , 1, 0, . . . , 0) with k 1s and n − k 0s, notice that P k = {U BU * : U ∈ U (n)}. This leads to the following: To handle the issue of the denominator vanishing, and to compute the gradient, we apply all the same techniques which were required for the k = 1 case (see Corollaries 7.8 and 7.9). These formulas end up having the right bit complexity, and so they immediately imply an efficient strong counting oracle for P k . Unlike in the case of k = 1, the problem of sampling in the case of k > 1 is more difficult as the image of P k under the map φ : X → diag(X) is much more complicated. Thus we leave as an open problem the question of sampling from the associated maximum entropy distributions in the case of P k for k > 1.
Bounding box
In this section, we prove the general bounding box result (Theorem 4.2). With this, we then specialize to the cases of rank-k projections and convex bodies.
General bounding box
In what follows we will discuss "interiors" of a probability distribution µ given by two parameters, (η, δ). The η parameter will control how far we are from the boundary, and the δ parameter will control how well-distributed µ is on its support. At the end of the day, we will prove that for nice situations one only needs to consider the η parameter (as in the bounding box result of [36]).
We now define the two-parameter interior. In what follows, we will let V L be the vector subspace given by L(X) = 0, where L(X) = B is the maximal set of linearly independent equality constraints for Ω. More informally, V L is the vector space corresponding to the minimal affine space in which K = hull(Ω) lives (i.e., translate the affine space so that 0 ∈ V L ). The fact that L(X) = B is a maximal linearly independent set means that the optimal solution to the dual program is unique when restricted to V L . (Existence follows from Lemmas A.1 and A.3.) We discuss this further in Section 8.
Definition 6.1
We define the (0, δ)-interior of µ to be the set of all A ∈ K such that for all Y ∈ V L we have: Morally, this says that every closed half-space containing A contains more than δ of the mass of µ. Note that this is not always an open set (which is perhaps a bit odd for something called the "interior", but this will be our convention).
Definition 6.2 (Two-parameter interior)
We define the (η, δ)-interior of µ to be the set of all A ∈ K such that the ball of radius η about A is contained in the (0, δ)-interior of µ. Note that this is not necessarily an open set.
The next lemma is then precisely how to combine the two parameters to get a bounding box for the optimal solution to the dual program.
Lemma 6.1 (Two-parameter bounding box) Given A ∈ K, let Y ⋆ ∈ V L be the optimal solution to the dual program. Recall the dual objective: This gives the bound: On the other hand, plugging in Y = 0 gives an upper bound on the optimal value of the above dual program: Rearranging this gives the result.
This gives us a good way of bounding solutions corresponding to interior points of K. In general however, trying to get a bound on the δ parameter of the interior is much more difficult than that of the η parameter. To deal with this we define a property of µ which allows us to only have to consider the η parameter.
Definition 6.3 (δ-balanced measure)
We say that µ is δ-balanced if for any X ∈ Ω, we have that at least exp(−poly(δ −1 , d)) of the mass of µ is contained in the δ-ball about X (where d is the dimension of K). If f is the polynomial in the exponent (i.e., exp(−f (δ −1 , d))), then we say that µ is δ-balanced with bound f .
We now prove the main bounding box theorem for such balanced measures. We then use this to obtain a bounding box for rank-k projections and for convex bodies in the following sections.
Theorem 6.2 (Bounding box for balanced measures)
Proof: We first show that the ( η 2 , 0)-interior of µ is contained in the (0, exp(−f ( 2 η , d)))-interior of µ. To see this, let A 0 be some element of the ( η 2 , 0)-interior of µ. Then any closed half-space containing A 0 also contains an η 2 -ball about some X ∈ Ω. That is, for every Y ∈ V L there exists X such that: Since µ is η 2 -balanced, we have that exp(−f ( 2 η , d)) of the mass of µ is contained in the η 2 -ball about X. This implies: Remark 6.3 Note that Theorem 6.2 is immediately applicable to uniform discrete measures on (singly) exponentially sized sets S. In particular, such a measure is automatically balanced with constant bound f = log |S|.
Rank-k projections
We now prove bounding box result for P k , by showing that µ k is balanced and applying the previous theorem. Note that in this case L(X) = B reduces to Tr(X) = k, and so V L is the set of traceless Hermitian matrices in this case. Corollary 6.4 (Bounding box for P k ) Let µ k be the uniform distribution on P k . Then given A in the (η, 0)-interior of µ k , the optimal traceless solution Y ⋆ of the corresponding dual program is Proof: We prove that µ k is balanced and then apply the previous proposition. The number of balls of size δ required to cover the unit ball in R n 2 (with Euclidean/Frobenius norm) is at most (2n/δ) n 2 . Since the set of projections of rank k is contained in the sphere of radius √ k, we have that it requires at most (2n √ k/δ) n 2 δ-balls to cover all such projections. With this, there exists some δ-ball (call it B δ ) in this cover which contains at least (2n √ k/δ) −n 2 of the mass of µ k . Pick some X ∈ P k ∩ B δ , and let B 2δ (X) be the ball of radius 2δ which is centered at X. Thus, in fact B 2δ (X) contains at least (2n √ k/δ) −n 2 of the mass of µ k . By unitary invariance of µ k , we have that the ball of radius 2δ about any point of P k contains at least (2n √ k/δ) −n 2 of the mass of µ k . That is, µ k is δ-balanced with bound f (δ −1 , n) = n 2 log(4n √ k · δ −1 ) for all δ > 0. Applying the previous proposition then gives the result.
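The balance property behind this proof can also be seen numerically in the rank-one case: for the uniform measure µ 1 on P 1 , the mass of a δ-ball (Frobenius norm) around a fixed pure state equals (δ²/2)^{n−1} for δ ≤ √2, which is of the required form exp(−poly(δ −1 , n)). The following is a small sketch of ours comparing this exact value against a Monte Carlo estimate; the distance identity ‖vv ∗ − ww ∗ ‖ F ² = 2(1 − |⟨v, w⟩|²) is used.

```python
import numpy as np

rng = np.random.default_rng(4)
n, delta, num_samples = 3, 0.6, 200_000

# Haar-random pure states v, compared to the fixed pure state e_1 e_1^*.
v = rng.standard_normal((num_samples, n)) + 1j * rng.standard_normal((num_samples, n))
v /= np.linalg.norm(v, axis=1, keepdims=True)
# Frobenius distance between rank-1 projections: ||vv* - e1e1*||_F^2 = 2(1 - |v_1|^2).
dist = np.sqrt(2 * (1 - np.abs(v[:, 0]) ** 2))

print((dist <= delta).mean())          # empirical mass of the delta-ball
print((delta ** 2 / 2) ** (n - 1))     # exact value (delta^2 / 2)^(n - 1)
```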
Remark 6.5
In the discrete measure case, the authors of [38] were able to improve the dependence on η of the bounding box from η −1 to log(η −1 ). This leads to a max-entropy approximation algorithm which does not depend on η. One may then naturally ask whether or not this is possible for the bounding box for µ k discussed here. The answer turns out to be "no", and this can be seen by considering the optimal Y ⋆ = diag(y 1 , y 2 ) in the case of n = 2 and k = 1. Specifically one can show that for A = diag(η, 1 − η), the value of |y 1 − y 2 | is of the order η −1 as η → 0. Since the relative entropy of the optimal distribution is unbounded as η approaches 0, approximation of Y ⋆ cannot help us to improve the dependence of |y 1 − y 2 | on η −1 .
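For intuition, the n = 2, k = 1 example can be checked numerically. The max-entropy measure on P 1 with diagonal Y ⋆ = diag(y 1 , y 2 ) pushes forward to a density proportional to e^{-(y 1 − y 2 )t} for the first simplex coordinate t ∈ [0, 1]; matching the marginal A = diag(η, 1 − η) forces the mean of t to equal η, and solving for the gap ∆ = y 1 − y 2 shows that ∆ grows like 1/η. A small sketch (our own illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import brentq

def mean_t(delta):
    # Mean of t on [0, 1] under density proportional to exp(-delta * t).
    return 1.0 / delta - np.exp(-delta) / (-np.expm1(-delta))

for eta in [0.1, 0.01, 0.001]:
    # Solve mean_t(delta) = eta for the gap delta = y_1 - y_2.
    delta = brentq(lambda d: mean_t(d) - eta, 1e-9, 1e7)
    print(f"eta = {eta:7.3f}   delta = {delta:12.3f}   eta * delta = {eta * delta:.3f}")
# eta * delta tends to 1, i.e. |y_1 - y_2| scales like 1/eta.
```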
Convex bodies
We now prove the bounding box result for convex bodies. Instead of applying the previous theorem directly, we make some simpler computations which are in the same spirit.
Corollary 6.6 (Bounding box for convex bodies) Let µ be the uniform distribution on a d-dimensional convex body Ω contained in a ball of radius R. (Note that K = hull(Ω) = Ω in this case.) Then given α in the (η, 0)-interior of µ, the optimal solution y ⋆ ∈ V L of the corresponding dual program is such that ∥y ⋆ ∥ ≤ (2d/η) log(4R/η).
Counting oracle for P k
In this section, we prove existence of a strong counting/integration oracle for the objective function of the dual program Dual k . Recall the dual objective function: We want to be able to efficiently compute this function and its gradient. In the case of rank-k projections, we make the simplifying assumption that Y and A are both diagonal. This simplification is actually without loss of generality, due to the Schur-Horn theorem (Corollary B.2) and unitary invariance of µ k . Further, it is enough to consider only E k (Y ) (which is independent of A) since ⟨Y, A⟩ is linear and hence easy to handle. This leads to the main theorem of this section, stated originally as Theorem 4.5.
Theorem 7.1 (Counting oracle for P k )
There is an algorithm that, given n ∈ N, k ∈ [n], an n × n real diagonal matrix Y = diag(y), and a δ > 0, returns numbers Ē, Ḡ that approximate E k (Y ) and its gradient to accuracy δ, where E k is the exponential integral defined above (and in Definition 2.2). The running time of the algorithm is polynomial in n, log(1/δ), and the number of bits needed to represent e −y i for any i ∈ [n].
The main tool we use to prove this theorem is a collection of explicit formulas for computing E k and its gradient. We first discuss this in full detail for the case of k = 1. After that, we discuss how to generalize the arguments to the k > 1 case.
Algorithm for k = 1
In this section, we construct the strong counting/integration oracle for rank-1 projections by giving formulas for the function E 1 and its gradient (Propositions 7.4 and 7.6). Specifically, for diagonal Y = diag(y) with distinct entries λ 1 > · · · > λ k with multiplicities m 1 , . . . , m k , we can compute the following where M (y) and M p (y) are matrices defined below: The only potentially hard part of computing these expressions is computing the determinants of M (−y) and M p (−y). It is a standard fact that one can compute a determinant in time polynomial in the number of bits needed to represent the matrix, so we just need to demonstrate that the matrices have the necessary bit complexity. Considering Definitions 7.1 (for γ = 1) and 7.2 below, we see that the matrix entries depend on computing e −y i , n!, and y n i . All of these can be computed in time polynomial in n and number of bits needed to represent e −y i , which is exactly what we need. That said, all we have left now is to prove the two formulas stated above, and we do this in the following sections.
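As a sanity check on these formulas, here is a minimal numerical sketch (not part of the paper's algorithm). For diagonal Y with distinct entries, the evaluation formula reduces to the divided-difference form E 1 (diag(y)) = (n − 1)! Σ i e^{−y i } / Π j≠i (y j − y i ), which we compare against a Monte Carlo estimate over the uniform distribution on the simplex; the identification of E 1 with a simplex integral is the content of Proposition 7.4 below.

```python
import numpy as np
from math import factorial

def E1_divided_difference(y):
    # E_1(diag(y)) = (n-1)! * sum_i e^{-y_i} / prod_{j != i} (y_j - y_i),
    # valid for distinct y_i; the non-distinct case is handled in the text
    # by taking limits (L'Hopital).
    y = np.asarray(y, dtype=float)
    total = 0.0
    for i in range(len(y)):
        total += np.exp(-y[i]) / np.prod(np.delete(y, i) - y[i])
    return factorial(len(y) - 1) * total

def E1_monte_carlo(y, num_samples=200_000, seed=0):
    # E_1(diag(y)) = E_x[exp(-<y, x>)] for x uniform on the probability
    # simplex, i.e. x ~ Dirichlet(1, ..., 1).
    rng = np.random.default_rng(seed)
    x = rng.dirichlet(np.ones(len(y)), size=num_samples)
    return float(np.exp(-x @ np.asarray(y, dtype=float)).mean())

y = [0.3, 1.1, 2.4, -0.7]
print(E1_divided_difference(y), E1_monte_carlo(y))   # agree to a few digits
```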
Evaluating the dual integral
We now prove the main evaluation formulas for integrals on the manifold P 1 . Throughout we will often consider integrals on the unit sphere in C n , denoted S n C , instead of on P 1 directly, and we will let µ S n C refer to the Haar measure on the unit sphere. Note that the transfer of formulas from the sphere to P 1 is straightforward, as given by (2) of Proposition 7.4. First, we define a parameterized matrix of a particular form which will show up many times in our computations. Definition 7.1 (Matrix for dual integral, k = 1) Given y 1 , . . . , y n ∈ R, let λ 1 < · · · < λ k denote the distinct values of y i with multiplicities m 1 , . . . , m k . Given γ, we define an n × n matrix M (y, γ) as follows: We also define M (y) := M (y, 1). Note that only one row of M (y, γ) depends on γ.
We now state a lemma which gives the most basic result about integrals on P 1 . Specifically, we state a well-known result for integrals of polynomial-like functions. This proof is very related to the unitarily invariant inner product on homogeneous polynomials, which has many names in the literature: Bombieri inner product, Fischer-Fock inner product, Segal-Bargmann inner product, etc. The following lemma is standard, see e.g. Lemma 3.2 of [33].
The next lemma then shows the connection between the integrals we want to compute and the Laplace transform. As an immediate corollary, we obtain equality of (3) and (4) in Proposition 7.4 below in the case of distinct values of y 1 , . . . , y n .
Proof: We first compute: .
Plugging in t = 1 gives the first equality in the second statement. To see the last equality, notice that because y 1 < · · · < y n , the expression for det(M (y)) will be a sum of exponentials multiplied by Vandermonde determinants (expand along the last row of M (y)). The result follows, taking care to keep track of signs.
We now state and prove the full evaluation formula for P 1 . The two most involved parts of the proof are showing equality of (1) and (4) on polynomials and showing equality of (4) and (5) for non-distinct values of y 1 , .., y n .
Proposition 7.4 (Evaluating the dual integral, k = 1) Fix n ∈ N, and let µ S n C , µ 1 , µ ∆ 1 be the uniform probability distributions on the complex unit sphere in C n , on P 1 , and on the standard simplex in R n , respectively. For a given analytic function f on the standard simplex the following expressions are equal: 3.
If f (x) = e y,x for some real y with distinct entries λ 1 < · · · < λ k with multiplicities m 1 , . . . , m k , then we have another equal expression: Proof: First, for the equality of (1) and (2), note that µ 1 is the pushforward measure of µ S n C through the map ψ : S n C → P 1 given by ψ : v → vv * . (To see this, note that ψ is unitarily invariant and µ S n C and µ 1 are the unique unitarily invariant measures on their respective domains.) With this, we then have: That is, (1) and (2) are equal. Next, the equality of (3) and (4) follows from the fact that the map between the two domains of integration (both of which are simplices) is affine. Therefore the determinant of the Jacobian is a constant, and so we only need to integrate over a constant function to determine that constant. A simple induction shows that it is (n − 1)!.
To prove the equality of (1) and (4), we compute the integrals on a given monomial x m := x m 1 1 · · · x m n−1 n−1 (1 − x 1 − · · · − x n−1 ) mn . First, by Lemma 7.2 we have: The last equality is due to Lagrange interpolation, considering the sum as a function of n. We further have: That is, we have equality whenever n = 2, proving the base case. The rest of the proof goes by induction. First we compute for α = 1 − x 1 − · · · − x n−2 : With this, we then compute the following by induction, letting β = 1 − x 1 − · · · − x n−3 : This completes the proof of equality of (1) and (4). Finally, we prove the equality of (4) and (5) for f (x) = e y,x . Note that if y 1 < · · · < y n , then the result follows from the previous lemma. Otherwise, the expression in (4) (for this function f ) is continuous in y 1 , . . . , y n , and so we can limit the expression for distinct eigenvalues. That said, we let y ′ 1 < · · · < y ′ n be distinct values near to the y i , and we apply L'Hoptial's rule to det(M (y ′ )) based on the multiplicities of the y i . Specifically, for each i ∈ [k] we apply the following differential operator to numerator and denominator (let ∂ i := ∂ y ′ i ): The powers here correspond to the number of terms of the denominator of det(M (y ′ )) i<j (y ′ j −y ′ i ) which will vanish when the m i values of y ′ 1 , . . . , y ′ n limit to λ i . That said, we now want to compute: We first compute the denominator via the product rule, noting that the only nonzero term occurs whenever all derivatives from a given D i are applied to differences of eigenvalues corresponding to λ i : We next compute the numerator using the fact that exactly one row of the matrix depends on any given y ′ i , as so we can apply the derivatives to the appropriate rows. Further, this means the numerator can still be expressed as a determinant. We also incorporate the factorials in the denominator expression above, by dividing each column by the appropriate factorial: This gives the result.
Computing the gradient
We now compute the gradient of E 1 (Y ) for Y = diag(y) using the above formulas. The first thing to note is that we can use an argument similar to what we used in the proof of the evaluation formula. Specifically, note the following expression where ∂ y l := ∂ ∂y l : In particular, we obtain the following bound where y 1 ≥ · · · ≥ y n are the entries of diagonal Y : From these observations, we have that ∂ y l E 1 (Y ) is continuous on diagonal matrices Y . Therefore, to compute the gradient we can first assume that y l is distinct from the other diagonal entries, and then limit via L'Hopital's rule (as in the proof of the evaluation formula). We do exactly this to prove the gradient formula, after defining another parameterized matrix.
Definition 7.2 (Matrix for gradient formula, k = 1) Given y 1 , . . . , y n ∈ R, let λ 1 < · · · < λ k denote the distinct values of y i with multiplicities m 1 , . . . , m k . Given p ∈ [k], we define an n × n matrix M p (y) as the matrix which differs from M (y) in one column, given as follows: That is, ∂ λp mp is applied to the right-most column of M (y) that depends on λ p . Proposition 7.6 (Gradient formula, k = 1) Assume y 1 , . . . , y n are the diagonal values of diagonal Y , with distinct values λ 1 > · · · > λ k and multiplicities m 1 , . . . , m k . Letting p be such that y l = λ p , we have the following expression: Proof: We first assume that y l is distinct from λ p , and then we limit at the end. Specifically, we assume the distinct values of y 1 , . . . , y n are λ 1 > · · · > λ p > y l > λ p+1 > · · · > λ k , where now the multiplicity of λ p is now one less than it was originally. We let y ′ denote these new values of y (with y l possibly changed) and let m ′ i denote these new multiplicities (only m p decreased by 1). We now want to compute: It is at this point that we limit y l to λ p and use the L'Hopital's rule argument. (Recall the above discussion which describes why this argument is valid.) We want to apply this argument to the following part of the above expression: The key is to notice that the denominator contains exactly m ′ p + 1 = m p factors of (λ p − y l ) up to scalar, where m ′ p factors come from the determinant. With this, we apply ∂ y l to the numerator and denominator 2m ′ p times and then set y l = λ p . Computing this for the denominator is straightforward: The computation is easy here for the same reason as in the proof of Proposition 7.4: using the product rule for all the derivatives only leaves a single term which does not evaluate to zero once we set y l = λ p . A similar thing happens for the numerator, which yields: With this, we have the following expression: This completes the proof.
Algorithm for k > 1
We now discuss how to generalize the formulas and arguments from the rank-1 case to the rank-k case. The computations done here are very similar to those given above, and so we will be a bit less explicit in what follows. And although the matrices involved are a bit more complicated (see Definitions 7.3 and 7.4), we still achieve the required bit complexity bounds. Specifically, each of the entries of these matrices require a polynomial number of computations of m!, y m i , and e −y i for m ≤ n, and so the determinants can still be computed as efficiently as is necessary for Theorem 7.1.
We now state the explicit integral formulas for E k and ∇E k which generalize those of the k = 1 case of the previous section. Our main tool to prove these formulas is the Harish-Chandra-Itzykson-Zuber formula ( [21], [24]), given as follows.
Theorem 7.7 (HCIZ formula) For n × n Hermitian matrices Y and B with distinct eigenvalues y 1 < · · · < y n and β 1 < · · · < β n respectively, we have the following where µ is the Haar measure on the unitary group U (n): .
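For concreteness, one standard normalization of the HCIZ formula (conventions in Theorem 7.7 may differ in signs and constants) reads ∫_{U(n)} e^{tr(A U B U ∗ )} dµ(U) = (Π_{p=1}^{n−1} p!) · det(e^{a i β j }) / (∆(a) ∆(β)), where ∆ denotes the Vandermonde Π_{i<j}(a j − a i ) and µ is the Haar probability measure. The sketch below (our own check) verifies the n = 2 instance by Monte Carlo over Haar-random unitaries.

```python
import numpy as np

def haar_unitary(n, rng):
    # Haar-random unitary via QR of a complex Gaussian matrix (Mezzadri's recipe).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def hciz_closed_form(a, b):
    # det(e^{a_i b_j}) / (Vandermonde(a) * Vandermonde(b)); prod_{p<n} p! = 1 for n = 2.
    a, b = np.asarray(a, float), np.asarray(b, float)
    vand = lambda v: np.prod([v[j] - v[i] for i in range(len(v)) for j in range(i + 1, len(v))])
    return np.linalg.det(np.exp(np.outer(a, b))) / (vand(a) * vand(b))

rng = np.random.default_rng(0)
a, b = np.array([0.7, -0.4]), np.array([1.3, 0.2])
A, B = np.diag(a), np.diag(b)
vals = []
for _ in range(100_000):
    U = haar_unitary(2, rng)
    vals.append(np.exp(np.trace(A @ U @ B @ U.conj().T).real))
print(np.mean(vals), hciz_closed_form(a, b))   # these agree to a few digits
```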
Using the L'Hopital's rule argument from the rank-1 case, we can limit B to the rank-k PSD projection diag (1, . . . , 1, 0, . . . , 0) to obtain a formula for E k (Y ) for Y with distinct eigenvalues. Using again the same sort of argument, we can then limit Y to any real diagonal matrix (eigenvalues not necessarily distinct). First, we need to define a parameterized matrix as in the rank-1 case.
Definition 7.3 (Matrix for dual integral, k > 1)
Given y 1 , . . . , y n ∈ R, let λ 1 < · · · < λ k denote the distinct values of y i with multiplicities m 1 , . . . , m k . Let the polynomial q i,j (t) be defined as follows: We define an n × n matrix M (k) (y) as follows: Also, any term of the form λ m i for m < 0 in the above matrix should be replaced by 0.
The matrix defined above and the arguments of the previous section then allow us to write down an explicit formula for E k (Y ).
Corollary 7.8 (Evaluating the dual integral, k > 1)
Let Y be an n × n Hermitian matrix with eigenvalues y 1 , . . . , y n and distinct eigenvalues λ 1 > · · · > λ k with multiplicities m 1 , . . . , m k . We have the following: This leads to a formula for E k (Y ): Notice that this reduces to (5) of Proposition 7.4 whenever k = 1. As in the k = 1 case, we use the Schur-Horn theorem and unitary invariance to restrict the inputs of E k (Y ) to real diagonal matrices (see Section B). Therefore, we only need to compute the gradient on the diagonal entries of Y . The arguments are essentially the same as those of the k = 1 case, again via L'Hopital's rule, and so we state the gradient formula for E k as a corollary without proof. First though, we need to define another parameterized matrix for the gradient formula, as in the k = 1 case.
Definition 7.4 (Matrix for gradient formula, k > 1)
Given y 1 , . . . , y n ∈ R, let λ 1 < · · · < λ k denote the distinct values of y i with multiplicities m 1 , . . . , m k . Let q i,j (t) be defined as in Definition 7.3. Given p ∈ [k], we define an n × n matrix M (k) p (y) as the matrix which differs from M (k) (y) in one column: in the columns corresponding to λ p its entries are of the form e λp q i,j (λ p ) with j ranging over 0, . . . , m p − 2, m p (the column with index m p − 1 is replaced by the one with index m p ). That is, ∂ λp mp (the m p -th derivative in λ p ) is applied to the right-most column of M (k) (y) that depends on λ p . As in Definition 7.3, any term of the form λ m i for m < 0 in the above matrix should be replaced by 0.
Sampling from P 1
Given some real diagonal matrix Y as in the previous section, we want to be able to sample from the measure on P 1 given by e − Y,X dµ 1 (X). It is not immediately obvious how to do this on P 1 itself, so we instead transfer the measure to a simpler domain. Specifically, we use the Proposition 7.4 to transfer the sampling problem to the simplex. Once on the simplex, we can apply standard techniques via the coordinate-wise cumulative distribution function (CDF). That said, we now state the sampling process for P 1 and then use the rest of the section to fill in the details. Proposition 7.10 (Rank-one Sampling) Let Y = diag(y) be a real diagonal n × n matrix. The following process produces samples from the measure e − Y,X dµ 1 (X) on P 1 .
Proof:
We give a proof sketch here, leaving the details to the remainder of this section. First, the reason we are able to reduce to sampling on the simplex is due to Proposition 7.11. Specifically, let Φ : P 1 → ∆ 1 be given by Φ : X → diag(X). Then for any v ∈ ∆ 1 , we have the following where T is the complex unit circle: The fact that e − Y,X dµ 1 (X) is invariant under the action of conjugating X by diag(z) for z ∈ T n then implies that we can uniformly sample z from T n via Proposition 7.11. Second, sampling from the measure e − Y,X dµ 1 (X) is nontrivial, but doable by sampling each coordinate conditioned on the previous coordinates sampled. To do this we need to be able to efficiently compute the cumulative density function (CDF) for the conditioned measures, and we discuss how to do this below. Once we have this, we can sample each conditioned coordinate using standard techniques; see [31], Section 4.5.
For the case of µ k for k > 1, we leave the question of sampling from the associated maximum entropy distributions as an open problem.
Transferring to the simplex. To transfer sampling from P 1 to sampling from the simplex, we need a way of applying pushforward to sampling. The way to do this is via disintegration (see [9]), which we discuss in the following result. Proposition 7.11 (Pushforward sampling) Let X, Y be separable complete metric spaces, and let µ, ν be probability measures on X, Y respectively. Let Φ : X → Y be a map such that ν is the pushforward measure of µ. Further, for any y ∈ Y , let µ y denote the measure on the fiber Φ −1 (y) given by disintegration: i.e., such that X f (x)dµ(x) = Y Φ −1 (y) f (x)dµ y (x)dν(y) for all measurable f (see [9]). Then the measure on X generated by sampling y from (Y, ν), followed by sampling x from (Φ −1 (y), µ y ), is equal to µ.
Proof: Let γ denote the measure on X generated by the described two-step sampling process. For any measurable set A we have the following, where P 1 and P 2 denote the probabilities according to the first and second steps of the process respectively: The second equality is just by definition of conditional probability. We then further have:
That is, γ(A) = µ(A).
Computing the conditioned CDF. We now compute the conditioned CDF for each coordinate of the measure on the simplex in Corollary 7.13, after a necessary lemma. Note that the formula below in Corollary 7.13 is not given in full explicit detail. However, the formula is still a constant times a determinant of a matrix, and expressions are given for the entries of that matrix. They are in fact rational functions of polynomials in β, y i , e y i β , and factorials at most n (see below). Therefore, the whole determinant is computable in time polynomial in n and the number of bits needed to represent e y i β .
Proof: The proof follows from a simple substitution (u i = x i 1−γ ), applying Proposition 7.4, and then multiplying and dividing factors of (1 − γ) in the rows and columns of M ((1 − γ)y) to obtain M (y, 1 − γ).
To see this, we apply the change of variables u i = x i 1−γ and Proposition 7.4: Note now that we can do the following to M ((1 − γ)y) to make it so that only one of its rows depends on γ (recall the definition of M (y) from Definition 7.1). First, divide the ith row of the matrix by (1 − γ) i−1 up to i = n − 1. Then, for any λ p multiply the jth column depending on λ p by (1 − γ) j−1 . Only the last row of the matrix obtained will depend on γ, and in fact this matrix is precisely M (y, 1 − γ). The process described above is equivalent to pulling out of the determinant a factor of (1 − γ) with the following exponent: With this have that since k p=1 m p = n. The result follows.
Corollary 7.13 (Conditioned CDF formula)
Fix y 1 , . . . , y n ∈ R, and let λ 1 < · · · < λ p be the distinct values of y k+1 , . . . , y n with multiplicities m i . Further, fix x 1 = α 1 , . . . , x k−1 = α k−1 and let α := k−1 i=1 α i . Also, let x n := 1 − α − x k − · · · − x n−1 . The CDF denoted F k (β) for the simplex distribution e y,x , conditioned on the given values of x 1 , . . . , x k−1 , is given as follows for β ∈ [0, 1 − α] and y ′ := (y k+1 , . . . , y n ): Recall the definition of M (y, γ) from Definition 7.1. Since only the last row of M (y ′ , 1 − α − x k ) depends on x k , the above integral can be passed to that row and computed explicitly when y k = λ l : If y k = λ l , we have the simpler expression, Proof: We have: We compute the inner expression using the previous lemma and γ = α + x k : This then implies: Since only one row of M (y ′ , 1− α− x k ) depends on x k , we can compute the above integral entrywise on that row by linearity (after multiplying that row by the e y k x k factor). We now compute the final expression of the result, removing subscripts to simplify notation. First we make the change If λ = y, then we simply obtain e (1−α) . Otherwise, we use integration by parts to obtain:
Computing maximum entropy measures
In this section we describe the entire algorithm for computing the optimum Y ⋆ for the dual program Dual µ (A), given some A ∈ K = hull(Ω). The algorithm is essentially an application of the ellipsoid algorithm, based on a bounding box and a strong counting/integration oracle. We first discuss this algorithm in general, and then apply it to specific cases based on results from the previous sections. Before moving on, we discuss how the linear equality constraints L(X) = B come into play here. We want to restrict our search space to the vector space V L defined as the set of all X such that L(X) = 0. The main reason for this is, since the constraints given by L(X) = B pick out an affine space in which K is full dimensional, restricting the search space to V L causes the optimum Y ⋆ to be unique. Further, the bounding box results above apply specifically to this particular Y ⋆ .
Since we are given L effectively and explicitly, we assume for the ellipsoid algorithm that we can project the gradient (given by the strong counting oracle) onto V L . That said, we will from now on assume V L to be the domain in which we are optimizing.
The ellipsoid framework
Using the standard argument via Hölder's inequality, we have that the dual objective function is convex: With this, the main optimization tool we use to approximate the dual optimum Y ⋆ is the ellipsoid algorithm. Recall the following from [36], Theorem 2.13, which was essentially taken from [5].
Theorem 8.1 (Ellipsoid algorithm)
Given any β > 0 and R > 0, there is an algorithm which, given a strong first-order oracle for F A , returns a Y • ∈ V L such that: The number of calls to the strong first-order oracle for F A is bounded by a polynomial in d, log R, and log(1/β). Here, d is the dimension of the ambient Hilbert space in which Ω lies.
We now prove the main theorem (Theorem 4.4) regarding the existence of an algorithm for approximating the optimum to the dual objective.
where F A is the objective function for the dual program Dual µ (A), and Y ⋆ ∈ V L is the optimum of the dual program. The running time of the algorithm is polynomial in d, η −1 , log(ε −1 ), log(r), and the number of bits needed to represent A, L, and B.
Proof: To apply the ellipsoid algorithm, we need to choose the two parameters, β and R. Since µ is balanced with some polynomial bound f , we choose for R the bounding box given for balanced measures in Theorem 4.2: So, the set {Y ∈ V L : Y ≤ R} ⊂ {Y ∈ V L : Y ∞ ≤ R} contains the optimal Y ⋆ for the dual program. Next, we need to choose β. Note that for Y ∞ ≤ R we have: Therefore, choosing β := ε 4rR √ d implies: The ellipsoid algorithm then guarantees a Y • such that: The number of calls to the strong counting oracle is polynomial in d, log(R) = log(2η −1 · f (2η −1 )) and log(1/β) = log(4rR √ dε −1 ). Given the bounding box, each oracle call (now including computing Y, A ) can be implemented in time polynomial in d, η −1 , and the number of bits needed to represent A. This completes the proof.
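The following is a small numerical sketch of this pipeline for the simplest instantiation, P 1 with n = 3 and diagonal data (our own illustration). It uses off-the-shelf quasi-Newton minimization in place of the ellipsoid method and brute-force quadrature over the simplex in place of the counting oracle, and it assumes the dual objective takes the standard form ⟨y, a⟩ + log E 1 (y). At the optimum the gradient a − E ν [x] vanishes, i.e., the resulting max-entropy measure has the prescribed marginals.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.optimize import minimize

a = np.array([0.5, 0.3, 0.2])   # prescribed marginals (trace-1, interior)

def simplex_integral(g):
    # Integral of g(x1, x2, x3) over the simplex w.r.t. the uniform probability
    # measure; the factor 2 = (n-1)! normalizes Lebesgue measure.
    val, _ = dblquad(lambda x2, x1: g(x1, x2, 1 - x1 - x2), 0, 1, 0, lambda x1: 1 - x1)
    return 2.0 * val

def E1(y):
    # Brute-force stand-in for the counting oracle E_1(diag(y)).
    return simplex_integral(lambda *x: np.exp(-np.dot(y, x)))

def F(y):
    # Dual objective in the standard form <y, a> + log E_1(y).
    return float(y @ a) + np.log(E1(y))

def grad_F(y):
    # Gradient a - E_nu[x], where nu is the tilted measure e^{-<y,x>} / E_1(y).
    Z = E1(y)
    m = [simplex_integral(lambda *x: x[i] * np.exp(-np.dot(y, x))) / Z for i in range(3)]
    return a - np.array(m)

# Quasi-Newton minimization stands in for the ellipsoid method; the optimum is
# unique only up to shifts by the identity (the trace constraint / V_L restriction).
res = minimize(F, x0=np.zeros(3), jac=grad_F, method="BFGS")
print(res.x, grad_F(res.x))   # gradient ~ 0, i.e. the marginals match a
```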
Rank-k Projections
Next we apply the above result to the case of Ω = P k and µ = µ k , i.e., the case of rank-k projections. To do so we make a few tweaks to the proof of the theorem for the general algorithm given in the previous section. In particular, even though our domain P k lies in the space of Hermitian matrices, our strong counting oracle for E k only applies to real diagonal matrices Y . That said, we now state the theorem for rank-k projections and discuss such issues in the proof.
Corollary 8.3 (Main algorithm, P k case)
There exists an algorithm that, given n ∈ N, k ∈ [n], A in the η-interior of P k , and any ε > 0, returns a Hermitian Y • with F A (Y • ) − F A (Y ⋆ ) ≤ ε, where F A is the objective function for the dual program Dual k (A), and Y ⋆ is an optimum of the dual program. The running time of the algorithm is polynomial in n, η −1 , log(ε −1 ), and the number of bits needed to represent A.
Proof:
The result essentially follows from the general case, with a few details that need to be dealt with. First, the maximal set of linear equality constraints for P k boils down to something very simple within the space of Hermitian matrices: it is simply given by Tr(X) = k. Thus, our search space V L then becomes the set of traceless Hermitian matrices.
Next, by unitary invariance of µ k we can assume A is diagonal by unitary conjugation. Once we obtain an approximate optimum Y • for the diagonalized A, we can obtain an approximate optimum for the original A via conjugation by this unitary. Next, by the Schur-Horn theorem (see §B and the discussion at the start of §7) we can further assume that Y ⋆ is diagonal. That is, we can assume A is real diagonal and restrict the domain of F A (Y ) to real diagonal matrices Y .
Once we make this simplifying assumption, we have access to a strong counting/integration oracle for E k (Y ) by Theorem 4.5. The proof for the general case then goes through (using this strong counting oracle and the bounding box result for rank-k projections), giving the desired result.
The Goemans-Williamson measure
We discuss two main features of the pushforward through v → vv ⊤ of the Goemans-Williamson measure which are relevant to this paper. We abuse notation in this section by letting µ GW refer to the pushforward measure on V 1 . First, we prove that this measure is a max-entropy measure with respect to V 1 . Second, we demonstrate that this measure cannot be interpreted as a max-entropy measure on P 1 . This second point demonstrates the fundamental difference between max-entropy measures on V 1 and P 1 .
Goemans-Williamson measure on V 1 maximizes entropy
In this section, we demonstrate how the measure associated to the Goemans-Williamson SDP rounding scheme can be interpreted as a max-entropy measure. We describe it formally as follows. 1. Sample a random standard Gaussian vector g from R n .
2. Return the rank-1 PSD matrix (V g)(V g) ⊤ , where V is a matrix with A = V V ⊤ for the given PSD marginals matrix A.
The measure associated to this sampling process we refer to as the Goemans-Williamson measure and denote it µ GW . This measure is supported on the rank-1 real symmetric PSD matrices, which is the set of extreme points of the real symmetric PSD cone. Now let m be the Lebesgue measure on R n , and let µ be the measure on the real symmetric PSD cone which is the pushforward of m through the map Φ : x → xx ⊤ . With this we can also give an explicit description of the Goemans-Williamson measure.
Proposition 9.1 (Goemans-Williamson density function)
The Goemans-Williamson measure on the set of rank-1 real symmetric PSD matrices is given by where µ is the pushforward of Lebesgue measure through x → xx ⊤ .
Proof: Let A = V V ⊤ as in the definition of µ GW . Since a standard Gaussian g is distributed according to e − 1 2 g 2 dm(g), we can apply the change of variables formula to determine how x := V g is distributed. We have: Considering the pushforward of this measure through x → xx ⊤ gives the desired result.
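As a quick empirical illustration of the marginal computation (our own check, not part of the proof): if A = V V ⊤ and x = V g with g standard Gaussian, then E[x x ⊤ ] = V E[g g ⊤ ] V ⊤ = A, so the pushforward measure indeed has marginals A.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# A random positive definite marginals matrix A = V V^T.
V = rng.standard_normal((n, n))
A = V @ V.T

# Sample from the Goemans-Williamson measure: x = V g, X = x x^T.
num_samples = 100_000
G = rng.standard_normal((n, num_samples))
X_mean = (V @ G) @ (V @ G).T / num_samples   # empirical E[x x^T]

print(np.max(np.abs(X_mean - A)))   # small (Monte Carlo error ~ 1/sqrt(num_samples))
```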
Note that strong duality then immediately implies µ GW is a max-entropy measure with respect to µ, since its density function is of the correct form. To demonstrate this more concretely, we prove this explicitly below via an explicit formula E µ (Y ). First, the following observation tells us that it is sufficient to restrict E µ (Y ) to positive definite Y .
Proof: Since X is PSD, we have that Y ≺ Z implies − Y, X ≥ − Z, X . Hence, to prove the result, we only need to show it for singular PSD matrices Y . Further, unitary invariance means we can restrict to diagonal Y . So, assume Y = diag(0, y 2 , . . . , y n ) for y i ≥ 0. Now consider: Note that the inner integrand above does not depend on x 1 , and so the evaluation of the inner integral yields some positive (possibly infinite) constant C as written above.
We now give an explicit formula for E µ (Y ) on positive definite Y .
Proposition 9.3 (Lebesgue evaluation formula)
We have the following explicit expression for E µ (Y ) for n × n real symmetric positive definite Y : Proof: Since µ is the pushforward measure of m through x → xx ⊤ , we have: The second equality is computed via the density function of the multivariate Gaussian.
This then leads to the main result of this section.
Corollary 9.4 (Max-entropy, SDP rounding)
Given an n × n real symmetric positive definite marginals matrix A, the Goemans-Williamson measure µ GW is the max-entropy measure with respect to µ, the pushforward through x → xx ⊤ of the Lebesgue measure on R n . That is, µ GW is the optimal measure for Prim µ (A).
Proof: Proposition 9.3 gives the following explicit expression for E µ (Y ) with n × n real symmetric positive definite input Y : By a standard computation, we then have the following: This implies the following regarding the gradient of the dual program objective Dual µ (A) for positive definite A: That is, Y ⋆ = 1 2 A −1 is the optimum for the dual program. By strong duality/Slater condition for µ (see Proposition A.6) and the density function for µ GW given in Proposition 9.1 above, this implies the result.
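The stationarity computation can also be checked numerically. Under the standard-form dual objective F A (Y ) = log E µ (Y ) + ⟨Y, A⟩, with E µ (Y ) proportional to det(Y ) −1/2 as in Proposition 9.3 (constants do not affect the gradient), the finite-difference derivative of F A vanishes at Y = (1/2) A −1 . A minimal sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # a positive definite marginals matrix

def F(Y):
    # Dual objective for V_1 (standard form, additive constants dropped):
    # log E_mu(Y) + <Y, A>  with  E_mu(Y) proportional to det(Y)^(-1/2).
    return -0.5 * np.linalg.slogdet(Y)[1] + np.trace(Y @ A)

Y_star = 0.5 * np.linalg.inv(A)

# Finite-difference directional derivatives of F at Y_star in random symmetric directions.
for _ in range(3):
    D = rng.standard_normal((n, n)); D = (D + D.T) / 2
    h = 1e-5
    print((F(Y_star + h * D) - F(Y_star - h * D)) / (2 * h))   # ~ 0
```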
Goemans-Williamson measure projected to the unit sphere does not maximize entropy
In this section we show that the Hermitian version of the measure µ GW is not a max-entropy measure for P 1 . We do not recompute the density function for µ GW in the Hermitian case, but only say that Proposition 9.1 can be adapted to show that in this case it is of the same form: ν(X) ∝ e − A 0 ,X for some positive definite A 0 . We want to "project" the (Hermitian) SDP rounding measure onto P 1 , and we want to compute the density with respect to µ 1 . To do this, we first project the Lebesgue measure onto the complex unit sphere S n C and then pushforward through x → xx * . We first state a few standard lemmas.
Lemma 9.5 Let f (z) be a Lebesgue measurable function on C n . Then: Proof: This is precisely the polar coordinates formula for Lebesgue measure in C n ∼ = R 2n . The constant 2π n (n−1)! is the volume of the complex unit ball in C n .
This shows that the projected density g can be computed from the Lebesgue density f as follows: We will now use the following lemma, which is standard. With this, we compute the following for f (x) ∼ e −⟨A, xx ∗ ⟩ : That is, the projected density is proportional to ⟨A, vv ∗ ⟩ −n on the unit sphere. With this, we have the following interesting fact.
Proposition 9.7
The "projection" of the (Hermitian) SDP rounding measure to P 1 is not a maxentropy measure with respect to µ 1 on P 1 .
Proof: By strong duality, max-entropy densities in both contexts take the form g(X) ∝ e − A,X . So, we just need to show that for all PD B we have: This is straightforward, e.g. using the fact that the left-hand side is a rational function in ℜ(v i ), ℑ(v i ) but the right-hand side is not.
Generalization of the maximum entropy framework to Lie groups
Recent work (e.g., [10,8]) has demonstrated interesting connections between Lie theory and TCS, and the max-entropy framework fits into this context as well. In what follows we will briefly discuss the case of Ω = P k and µ = µ k , as well as how this can be generalized. However, a more detailed investigation of the computational aspects of the max-entropy framework in this context is outside the scope of this paper. We first describe the case of Ω = P k and µ = µ k in a more general way. The unitary group U (n) acts on the real vector space of n × n Hermitian matrices by conjugation. This group action partitions the vector space into orbits, with X and Y being in the same orbit if and only if they have the same eigenvalues. Given any Hermitian matrix F , we denote the orbit corresponding to F by O(F ).
Consider now the matrix P k := diag (1, . . . , 1, 0, . . . , 0) where k denotes the number of 1s that appear in the matrix. Then the orbit O(P k ) is precisely the set of rank-k projections. That is, O(P k ) = P k , and so the unitarily invariant measure µ k on P k induces such a measure on O(P k ). In fact such a unitarily invariant measure µ F exists for any orbit O(F ) allowing us to extend our maximum entropy framework to such orbits of U (n).
This can be generalized beyond the group U (n), to the general setting of a Lie group G and its corresponding Lie algebra g upon which G naturally acts. The primal and dual programs for this generalized setting are the same as in the general case, with one exception. The element F ∈ g is now an input, and any algorithm for approximating an optimum for Dual µ F (A) will necessarily depend on the complexity of F . That said, strong duality holds in this case whenever A is in the interior of K = hull(O(F )) ⊂ g, and so the bounding box and the strong counting oracle are the two main results needed to obtain the polynomial-time ellipsoid-based algorithm described in this paper. As an aside, in this case K = hull(O(F )) is called an orbitope (e.g., see [34,4]).
Thus, the following optimization problem is a natural generalization of the (dual) maximum entropy problem considered in this paper. The G-invariant inner product used in the exponent here can be derived from the so-called Killing form of g when G is compact (e.g., see [29], Corollary 4.26).
Computability of this problem will be a subject of future work.
A.1 The dual formulation
The dual formulation Dual µ (A) is given as follows, for A ∈ K = hull(Ω) and Y in the ambient real inner product space R d : In Dual µ (A) we also assume a linear constraint on Y : L(Y ) = 0 where X ∈ Ω is such that L(X) = B. We ignore this constraint for now, and deal with it in Lemma A.1 below.
We now want to compute derivatives to connect this with the dual program. For any f ∈ L^2(µ), we compute: This immediately implies (almost everywhere, and we will suppress this caveat from now on): log(ν(X)) + 1 + ⟨Y, X⟩ + z = 0.
Combining these observations:
Proof:
For any Y, consider the decomposition Y = Z + Z^⊥ where L(Z) = 0 and ⟨Z^⊥, Y′⟩ = 0 for all Y′ such that L(Y′) = 0. Note that A ∈ Ω implies L(X − A) = 0 for all X ∈ Ω, and so ⟨Z^⊥, X − A⟩ = 0. Letting F_A(Y) denote the dual objective, this implies: This completes the proof.
A.2 Strong duality under Slater's condition
We now prove a general result about obtaining strong duality from a Slater-type condition. In the next section, we show that this Slater-type condition holds for the max-entropy program in general. We also give more concrete proofs for µ k on P k and µ on V 1 in the following section.
i ,X dµ k (X) is a convex combination of measures in the interior of the constraints of Prim k (A), this proves the result for all A ∈ P k,ε . Letting ε → 0 then proves the result in full generality. Proposition A.6 (Slater's condition for V 1 ) Let A be in the interior of the PSD cone, and let µ be the pushforward of the Lebesgue measure m though x → xx ⊤ . Then there is a density function on the set of rank-one real symmetric PSD matrices which is in the interior of the constraints of Prim µ (A).
Proof: Let ν 0 (x)dm(x) be a Gaussian probability measure on R n with covariance matrix A. This precisely means: Let ν(X)dµ(X) be the pushforward of ν 0 (x)dm(x) through the map x → xx ⊤ . Then:
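Although the displayed computation is not reproduced in the text above, the idea behind this construction can be sanity-checked numerically. The following Python sketch is ours (the dimension, seed, and sample size are arbitrary): sampling x from a Gaussian with covariance A and pushing forward through x ↦ xxᵀ produces rank-one PSD matrices whose empirical mean recovers A, which is the marginal constraint that a feasible density for Prim_µ(A) must satisfy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# An arbitrary positive definite A (a point in the interior of the PSD cone).
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)

# Sample x ~ N(0, A) and push forward through x -> x x^T.
samples = rng.multivariate_normal(np.zeros(n), A, size=200_000)
mean_xxT = np.einsum("si,sj->ij", samples, samples) / len(samples)

# The empirical mean of x x^T approximates A (the marginal of the pushforward).
print(np.max(np.abs(mean_xxT - A)))   # small relative to the scale of A
```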
B The Schur-Horn theorem
We last discuss an idea that will be useful to us in a number of parts of this paper. Generally, the idea is that the unitary invariance of µ k allows us to often restrict to looking at diagonal matrices when considering the dual objective. The main observation is a corollary of the famous Schur-Horn theorem [35,23]. Here, U (n) is the unitary group and S n is the subgroup of permutation matrices.
Proof:
Let σ_0 be the permutation matrix which minimizes ⟨σDσ*, D′⟩ over all permutation matrices. By majorization, for any U the diagonal of UDU* can be written as a convex combination of the permutations of the diagonal of D. By linearity of ⟨·, D′⟩, the value of ⟨UDU*, D′⟩ must then be at least the value of ⟨σ_0 D σ_0*, D′⟩. Since the integration part of F_A(Y) is unitarily invariant, this is then equivalent to:
Since A is diagonal, this follows from the previous corollary.
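To make the reduction to permutations concrete, here is a small Python sketch of ours (the diagonal entries are made up): it brute-forces the minimum of ⟨σDσ*, D′⟩ over permutations for diagonal D and D′, and checks that the minimum is attained by pairing the largest entries of D with the smallest entries of D′, as the rearrangement inequality predicts.

```python
import itertools
import numpy as np

d = np.array([3.0, 1.0, 2.0])    # diagonal of D
dp = np.array([0.5, 2.0, 1.0])   # diagonal of D'

# For diagonal D, D', <sigma D sigma^*, D'> is sum_i d[perm[i]] * dp[i].
best_value, best_perm = min(
    (float(np.dot(d[list(p)], dp)), p) for p in itertools.permutations(range(len(d)))
)

# Opposite ordering (largest d against smallest dp) attains the same minimum.
opposite = float(np.dot(np.sort(d)[::-1], np.sort(dp)))
print(best_value, best_perm, opposite)
assert np.isclose(best_value, opposite)
```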
C Closeness of the approximate distribution
Let µ_1 and µ_2 be two probability measures on Ω, given as density functions with respect to a base measure µ. The KL divergence between µ_1 and µ_2 is defined as D_KL(µ_1 ∥ µ_2) = ∫_Ω log(dµ_1/dµ_2) dµ_1. With this we follow the proof of Lemma A.4 in [36] to obtain the following.
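For a concrete, hedged illustration of this definition (ours; the finite set and its two densities relative to the uniform base measure are invented), the KL divergence and the total variation distance can be computed directly, and the last line checks the Pinsker-type bound TV ≤ sqrt(KL/2) (natural-logarithm convention), which is the kind of inequality invoked in Corollary C.2 below.

```python
import numpy as np

# Two probability densities with respect to the uniform base measure on a
# four-point set (a toy stand-in for Omega and mu).
f1 = np.array([0.40, 0.30, 0.20, 0.10])
f2 = np.array([0.25, 0.25, 0.25, 0.25])

kl = float(np.sum(f1 * np.log(f1 / f2)))      # D_KL(mu1 || mu2)
tv = 0.5 * float(np.sum(np.abs(f1 - f2)))     # total variation distance

print(kl, tv, np.sqrt(kl / 2))
assert tv <= np.sqrt(kl / 2) + 1e-12          # Pinsker-type bound holds here
```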
Lemma C.1 Let Y ⋆ be the optimal solution to the dual objective function If µ ⋆ and µ • are the probability distributions associated to Y ⋆ and Y • respectively, then The density functions of the distributions associated to Y ⋆ and Y • can be given as .
Since Y⋆ is the optimal solution (and hence A is proportional to ∫_Ω X e^{−⟨Y⋆, X⟩} dµ(X)), we can compute the KL divergence as follows. As in Corollary A.5 of [36], we use the previous result to obtain bounds on the approximate optimal distribution and on the marginals of this distribution.
Corollary C.2 Let Y⋆ be the optimal solution to the dual objective function F_A(Y) with domain Ω and measure µ as in the previous lemma, and let Y• be such that F_A(Y•) ≤ F_A(Y⋆) + ε. If µ⋆ and µ• are the probability distributions associated to Y⋆ and Y• respectively, then Proof: The result follows from the previous lemma and the following well-known inequality (see e.g. [12], Lemma 12.6.1, pp. 300-301) relating KL divergence and total variation distance (Pinsker's inequality).
| 2020-04-17T01:00:58.975Z | 2020-04-16T00:00:00.000 | {
"year": 2020,
"sha1": "f5a19ebbc06cbc420291c7d108dd51ff4e388b86",
"oa_license": null,
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3357713.3384302",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "f5a19ebbc06cbc420291c7d108dd51ff4e388b86",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
8043337 | pes2o/s2orc | v3-fos-license | Spatial Heterogeneity of Soil Chemical Properties in a Subtropical Karst Forest, Southwest China
This study evaluates the spatial heterogeneity of the soil chemical properties of surface soils across a 1 ha old-growth subtropical karst forest in southwest China.
Introduction
Soils are formed by physical, chemical, and biological processes that act upon the geological parent material and the continuous interaction of these processes with the biotic, climatic, and topographic components of the environment [1,2]. These components often cause the heterogeneity in soil properties at the spatial and temporal scales [3][4][5]. Previous studies investigated the relationships between the spatial heterogeneity of soil properties and the environmental factors in different landscape types [1,[6][7][8][9]. Many studies demonstrated that topographic factors, such as slope percentage, aspect, elevation, and microrelief, play important roles in determining the spatial variation in soil properties at different scales [10,11]. However, the spatial variability of soil properties and the complex relationship between soil and topographic factors in different ecosystems or landscape types remain barely understood [12,13]. Soil properties, such as key factors, affect plant distribution, community dynamics, and even the structure and function of ecosystems [14,15]. Thus, analyses on the spatial heterogeneity of soil properties in different plant communities can contribute well to the understanding of the structure and function of soil. In addition, these analyses can explore the relationship between soil properties and plant diversity in an ecosystem.
A karst ecosystem is defined as an ecosystem that is restrained by a karst environment [16], especially by karst geological settings [17]. Carbonate rock is the material basement of a karst ecosystem, and its matter migration and energy transfer have their own particularities, such as soluble rock, calcium-rich, and double-layer hydrogeological structures, which are different from other ecosystems within the same climate zone [18]. The karst ecosystem in China mainly covers the southwestern regions, such as Guizhou, Yunnan, and Guangxi. A karst topography is a geological formation shaped by the dissolution of a layer or layers of soluble bedrocks, which are usually carbonate rocks, such as limestone or dolomite [16]. The existence of special landforms, such as the peak cluster, peak forest, low-lying land, and the funnel, cause significant changes in the topographic factors, such as elevation, slope, and aspect [19]. Moreover, various micro-reliefs, such as stone facing, stone trench, and swallet, formed by abundant rock outcrops dramatically influence the small-scale habitat heterogeneity [5]. Therefore, topographic factors possibly have an important influence on the spatial variability of soil properties in the karst region. However, studies on the spatial variability of soil properties in karst hill slopes and its relationship with topographic factors remain unavailable.
A karst mixed evergreen-deciduous broadleaved forest was chosen in the Maolan National Nature Reserve (MNNR) in southwest China because of its rich biological diversity and diverse topography. In the present study, the spatial heterogeneity of pH and soil nutrients in surface soils (0 cm to 10 cm deep) across 1 ha (100 m × 100 m) of old-growth subtropical karst forest was evaluated. The spatial variability and the spatial patterns of the soil chemical properties at the plot scale were characterized using a combination of classical and geostatistical methods. The objectives of this study are as follows: (1) to determine the spatial variability characteristics of the soil chemical properties of a subtropical karst forest and (2) to examine the correlations between the spatial distribution of the soil chemical properties and the local topographic variables.
Site Description.
The study area is located in the MNNR (25°09′20″ to 25°20′50″ N, 107°52′10″ to 108°05′40″ E), Libo County, Guizhou province, in southwest China (Figure 1). The reserve is approximately 200,000 ha in size and has an elevation in the range from 430 m to 1,078.6 m, with an average of 800 m. A subtropical monsoon climate dominates the area with a mean annual rainfall of 1,320.5 mm. The mean temperature ranges from 8.3 °C in January to 26.4 °C in July, with an annual mean of 15.3 °C. The annual evaporation is 1,343.6 mm, and the annual mean relative humidity is 83%. Carbonate rocks are usually exposed on the surface, and the soils are thin and discontinuous in the study area. The shallow, black limestone soil is rich in organic matter and nutrients (N, P, K, and Ca).
Soil Sampling and
Measurements. The 1 ha (100 m × 100 m) plot was established in a typical old-growth mixed evergreen-deciduous broadleaved forest in the MNNR in the summer of 2008. The plot is located at a steep southeastfacing hill slope (mean slope of ca. 45 ∘ ) from the valley bottom to the hilltop at an elevation in the range from 835 m to 912 m. Rock outcrops occur on almost the entire plot (ca. 85% of the ground surface). Castanopsis carlesii var. spinulosa, Cyclobalanopsis myrsinifolia, Platycarya longipes, Distylium myricoides, Rhododendron latoucheae, Osmanthus fragrans, Engelhardtia roxburghiana, Sloanea sinensis, and Carpinus pubescens dominate the vegetation at the plot. Using the DQL-1 forest compass (Harbin Optical Instrument Factory, China), the plot was divided into 100 contiguous 10 m × 10 m quadrats. Soil samples at depths in the range from 0 cm to 10 cm at three locations were chosen randomly within each 10 m × 10 m quadrat. The three soil samples in each quadrat were then mixed, and therefore a total of 100 soil samples were collected from the plot. Moreover, the elevation, slope degree and aspect, and the percentage of rock bareness were recorded. Elevation was measured using a portable GPS (GPSMAP 60CSx, Garmin Ltd., Taiwan, China). The slope degree and aspect were measured using a DQL-1 Forest Compass. The percentage of rock bareness (basement rock was not covered by soil and was exposed on the ground surface) within each 10 m × 10 m quadrant was visually estimated.
All soil samples were transported to the laboratory for chemical analysis. Each soil sample was air-dried and passed through a 2 mm sieve to separate fine earth and coarse soil fractions. All subsequent analyses were performed on the fine fractions. Ten chemical properties were analyzed according to the methods described in Bao [20]. Soil pH was measured in a 1 : 2.5 soil-to-water suspension. Soil organic matter (OM) was measured using the K 2 Cr 2 O 7 -capacitance method, total nitrogen (TN) was measured using the micro-Kjeldahl method, total phosphorus (TP) was measured using NaOH fusion and Mo-Sb colorimetric procedures, total potassium (TK) was measured using NaOH fusion and flame photometry, and total calcium (TCa) and total magnesium (TMg) were measured using atomic absorption spectrometry. The available nitrogen (AN) in the soil was determined using the diffusion-absorption method. Available phosphorus (AP) was extracted using NaHCO 3 solution and its content was determined using the Mo-Sb colorimetric method. Available potassium (AK) was extracted with neutral ammonium acetate and was measured using flame photometry.
Data Analysis.
Both the classical statistics and geostatistics were used to analyze the spatial features of the measured variables. Conventional statistics was used to indicate the degree of overall variation, while geostatistics was used to examine whether or not a variable is spatially structured.
The normality of all data sets was tested prior to the conventional statistical and geostatistical analyses. Data were log or square-root transformed when the normality test failed. Conventional statistics, that is, mean (median for skewed data), standard deviation (SD), and coefficient of variation (CV), was performed to indicate the overall variability of each analyzed item. Spearman correlation coefficients were used to determine the relationships between soil chemical properties and topography factors, such as elevation, slope degree, slope aspect, and rock bareness rate. All the analyses above were performed using the statistical software package SAS (version 9, SAS Institute Inc, Cary, NC, USA).
The spatial patterns of the measured variables were analyzed using GS+ software (version 5.3.2, Gamma Design Software, Plainwell, MI, USA) for semivariogram computation and kriging. This analysis produces variograms that reveal random and structured aspects of the spatial dependence in a dataset of multiple samples collected at increasing distances from each other (the lag interval). The variogram plots of the semivariance statistic γ(h) for a range of distance intervals h can be expressed as follows: γ(h) = (1/(2N(h))) Σ_{i=1}^{N(h)} [z(x_i) − z(x_i + h)]^2, where γ(h) is the semivariance, N(h) is the number of observation pairs separated by a distance h, z(x_i) is the value of the variable of interest at location x_i, and z(x_i + h) is the value of the variable of interest at a location at distance h from x_i.
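As a rough illustration (ours, not part of the original GS+ workflow; the grid, values, lag distances, and tolerance below are invented), an empirical semivariogram of this form can be computed in Python as follows.

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """gamma(h) = 1/(2 N(h)) * sum over pairs at distance ~h of (z_i - z_j)^2."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sqdiff = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # count each pair once
    dist, sqdiff = dist[iu], sqdiff[iu]
    gamma = []
    for h in lags:
        mask = np.abs(dist - h) <= tol
        gamma.append(sqdiff[mask].mean() / 2 if mask.any() else np.nan)
    return np.array(gamma)

# Toy example: a 10 x 10 grid of quadrat centres with 10 m spacing.
rng = np.random.default_rng(0)
xy = np.array([(i * 10.0, j * 10.0) for i in range(10) for j in range(10)])
z = rng.normal(size=len(xy)) + 0.05 * xy[:, 0]    # values with a weak spatial trend
print(empirical_semivariogram(xy, z, lags=[10, 20, 30, 40, 50], tol=5.0))
```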
In this study, three geostatistical models, namely, spherical, exponential, and linear models, were used to evaluate the resulting semivariograms. Three semivariogram parameters, namely, the nugget (C0), the range, and the ratio of structure variance (C) to sill variance (C0 + C) (SH%, hereafter), were derived and used in the analysis. C0 reflects either the variability at scales finer than the data resolution or the random error. The range indicates the spatial autocorrelation distance between data pairs. SH% represents the proportion of variance caused by spatial dependence. Semivariogram with a high SH% indicates a strong spatial structure [21,22]. Maps of the soil properties were produced with GS+ software, following the ordinary block kriging with a block size of 2 m × 2 m. The transformed data (logarithm or square root) prior to the semivariogram analysis were converted back to their original units prior to kriging [23].
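For concreteness, a hedged sketch (ours; the lag distances and semivariances below are invented) of fitting a spherical variogram model with nugget C0, partial sill C, and range a, and of computing the SH% ratio described above, could look like this in Python.

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, c0, c, a):
    """Spherical model: nugget c0, partial sill c, range a; the sill is c0 + c."""
    h = np.asarray(h, dtype=float)
    rising = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, rising, c0 + c)

# Hypothetical empirical semivariogram (lag distances in m, semivariances).
lags = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
gamma = np.array([0.30, 0.52, 0.68, 0.79, 0.84, 0.86, 0.87])

(c0, c, a), _ = curve_fit(
    spherical, lags, gamma, p0=[0.2, 0.6, 50.0],
    bounds=([0.0, 0.0, 1.0], [np.inf, np.inf, np.inf]),
)
sh_percent = 100.0 * c / (c0 + c)   # structural variance over sill, "SH%" in the text
print(f"nugget={c0:.2f}, sill={c0 + c:.2f}, range={a:.1f} m, SH%={sh_percent:.0f}")
```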
Results and Discussion
3.1. Descriptive Statistics. The data were analyzed using classical statistical methods to understand the characteristics of the general soil properties prior to the investigation on the spatial structure (Table 1). The minimum, maximum, difference between median and average, SD, and CV can describe the variability of a soil property. The results of the Kolmogorov-Smirnov test (K-S test) (α = 0.05 probability level) indicate that the soil properties data were distributed normally, except for TN, TCa, and AK, and that these variables were subjected to log transformation (Table 1).
The mean surface soil pH was slightly alkaline. The CV for the soil properties was in the range of 5%-65%. AP was the most variable among all the soil properties (CV = 63.9%). TP, TCa, and AK were also highly variable. Nielsen and Bouma [24] suggested the following three distinct classes of variability for soil properties based on CV values: 0%-15% indicates little variability; 10%-100% indicates moderate variability; and >100% indicates high variability. In the current study, the soil variability data in Table 1 show that all the CVs, except for soil pH, were between 10% and 65%. These data indicate that soil chemical properties in the study site were moderately variable at the local scale. Many studies reported that soil pH is the least variable soil property [25]. The current study also showed that soil pH in the karst forest has low variability (CV = 5.5%). Similar results were found in other karst forest types [9,26]. This moderate variability of soil properties may be attributed to the soil processes of eluviation in karst habitats [26].
Spatial Variability of Soil Chemical Properties.
The semivariogram model and some of the geostatistical parameters of soil chemical properties are shown in Table 2. A spherical model provided a significant fit (based on the largest R² value) to the semivariogram of TP, TK, AN, AP, and AK (Table 2, Figure 2). An exponential model was selected as the best-fit model for the pH and TMg based on the regression analysis (R²). A linear model provided the best fit to the variogram of OM, TN, and TCa. In geostatistics, mathematical models are fitted to variograms, and different models reveal the nature of the spatial pattern. For example, spherical models indicate distinct patches of large (or small) concentrations in a matrix of less (or greater) concentrations [13]. The observation that several models provide the best fit in our data indicates different spatial patterns among the set of soil properties.
The range in a variogram is the distance at which the variogram reaches the sill. The range of the semivariogram represents the average distance through which the variable semivariance reaches its peak value. The variability beyond this range does not depend on the separation distance, and the variables are no longer spatially related. The range may be viewed as the zone of influence of a variable or the transition from a state of spatial correlation to a state of absence of correlation [27]. Moreover, the range is a direct measurement of the scale of spatially correlated variation. The scale of the correlated spatial variation increases with increasing range. A small, effective range implies a distribution pattern composed of small patches. In this study, the range of spatial dependence for soil chemical properties in the 0 cm to 10 cm depth (Table 2) was as small as 45.6 m for AN and 50 m to 90 m for a host of soil chemical properties. The effective ranges for pH, TMg, and AK were greater than 200 m, indicating a large-patched distribution pattern.
The C0 values show a positive nugget effect, which may be explained by the sampling error, short range variability, and random and inherent variability [28]. The difference in the degrees of structured spatial variation is indicated by the variations in the nugget effect and in the sill. The structural variance, expressed as a percentage of the total variance, allows the direct comparison of the relative strength of spatial dependence [29]. The soil variables have the following three distinct classes of spatial dependence: a nugget-to-sill ratio <25% indicates strong spatial dependence; a 25%-75% ratio indicates moderate spatial dependence; and a ratio >75% indicates weak dependence [30]. In the current study, the nugget-to-sill ratio (defined as C0/(C0 + C)) was <25% for TP, TK, AP, and AK, indicating that these variables have a strong spatial autocorrelation and suggesting that the parent material, terrain, and the climate are the main causes that determine strong spatial autocorrelation. The ratios of pH, TMg, and AN were between 25% and 75%. The variables had moderate spatial autocorrelations, which may be controlled by the intrinsic variations in soil characteristics (texture, mineralogy, and soil forming processes) and the effects of natural vegetation on soil [31]. These results show the different degrees of spatial variability of soil properties in karst forest soils. Different soil chemical properties occupied varied spatial autocorrelation ranges, suggesting that the spatial heterogeneity of the soil chemical properties is possibly a function of the spatial scale.
Spatial Distribution of the Soil Chemical Properties.
Using the fitted models and the corresponding parameters, a block kriging with a block size of 2 m × 2 m was performed to obtain interpolated values for all variables throughout the plot (Figure 3). Mapping the variations in the concentration of each chemical property across the study area revealed several spatially explicit patterns. For example, pH, TCa, and TMg contained large patches of higher concentrations in the middle and upper parts of the study plot, whereas patches of higher TP, TK, AN, AP, and AK concentrations were prevalent in the lower portion of the plot. OM and TN had similar spatial distributions, with higher concentrations prominent in the center of the study area. In the heterogeneous karst landscapes, the topography changed with the functions of soil nutrient, temperature, and moisture [32,33]. In the study site, the soil properties were probably determined by the topographic factors, such as elevation, slope, and more rock outcrops.
Correlations between the Soil Chemical Properties and the
Topographic Factors. Table 3 shows the correlations between soil properties and topographic factors in the karst forest plot. Topographic variability showed several significant correlations with different soil chemical properties (Table 3). Soil pH, TCa, and TMg showed significant and positive correlations with elevation, slope, and rock bareness rate. On the other hand, TP, TK, and AK were negatively correlated with the elevation and slope. OM and TN were positively correlated with the slope and rock bareness rate. Topography is an important factor that controls both hydrological and soil processes at the landscape scale because it affects the moisture, as well as the accumulation and export of nutrients [10,11,34,35]. Some studies in the Guangxi karst regions showed a significant topographic influence on soil nutrients [36]. In the current study, the distribution of soil chemical properties in the forest plot was closely related to the topographic factors, of which the maximum values of pH, TCa, and TMg were found in the middle and upper parts of the slope of the plot and had significant positive correlation with elevation, slope, and the rock bareness rate. The middle and upper slopes present the high rock bareness rate and thin soil layer, and, thus, the weathering and eluviations of the rock contributed to the increases in soil pH, Ca, and Mg [37]. Previous studies showed that the pH value of soil is significantly correlated with the rock bareness rate [38] and that high soil pH tends to increase with increasing Ca and Mg contents [39], which is in accordance with the analytical
results of the current study. The positive correlation of OM and TN with the slope and rock bareness rate should be interpreted in such a way that the soil in the rock outcrop areas often clusters in the stone trench or swallet, which are favorable for accumulating the organic matter. Zhang et al. [40] proposed that sites with high rock bareness rate and sharp slope often contain high soil nutrient, as also revealed in the results of the current study. TP, TK, and AK have an extremely significant negative correlation with the elevation and slope. The sites with low altitude and slight slope often have a low rock bareness rate and a thick soil layer, allowing them to prepare for the gradual accumulation of TP, TK, and AK. Thus, it can be concluded that pH, OM, TN, TCa, and TMg increase, while TP, TK, AN, and AK decrease with increasing elevation and the rock bareness rate. Therefore, the current study demonstrated that topographic factors, such as elevation, slope, and rock bareness rate, play important roles in determining the spatial distribution and the variability of soil chemical properties in the subtropical karst forest in southwest China. However, topography, biology, and climate have synergistic effects on the spatial variability of soil nutrients [31]. The formation and evolution of soil fertility in forest ecosystems are significantly influenced by the biological functions of the ecosystems. The formation and sustainability of soil nutrients are based on the exuberant biological accumulation, and, in turn, the spatial variability of soil nutrients plays a part in the growth and spatial distribution of plants. Therefore, this study provided a basis for future investigations on soil-plant relationships and the ecological restoration of the degraded ecosystem in the karst region of southwest China.
Conclusions
In the current study, geostatistical methods were used to investigate the spatial heterogeneity of soil chemical properties under an old-growth subtropical karst forest in southwest China. The results of the classic statistical analysis indicate that the soil chemical properties in the study site are moderately variable at the local scale. Best-fit models to individual variograms include spherical, exponential, and linear models, indicating the different spatial patterns among the set of soil properties. A geostatistical analysis revealed spatial dependence at a scale from 10 m to 100 m for most of the soil properties. The nugget ratios of TP, TK, AP, and AK showed strong spatial autocorrelations (with the nugget/sill ratio < 0.25), indicating that the parent material, topography, and the climate are the main causes of the spatial correlations. The ratios of pH, TMg, and AN were between 25% and 75% and had moderate spatial autocorrelations, which may be controlled using the intrinsic variations in soil characteristics and the effects of natural vegetation on soil. The soil chemical properties had significantly high correlations with topography, indicating that the topographic factors, especially elevation, slope, and rock bareness rate, mainly affected the spatial distributions and variability of the soil chemical properties of a subtropical karst forest in MNNR, southwest China. | 2016-05-18T05:03:26.509Z | 2014-02-12T00:00:00.000 | {
"year": 2014,
"sha1": "af32b333c8542c15a33515ad5e0ab2f599c84e29",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2014/473651.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ccfb01e8f15a2368d4fe6bc7d595a51f3810d6f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259336905 | pes2o/s2orc | v3-fos-license | Efficient Removal of Carcinogenic Azo Dyes from Water Using Iron(II) Clathrochelate Derived Metalorganic Copolymers Made from a Copper-Catalyzed [4 + 2] Cyclobenzannulation Reaction
A novel synthetic strategy is disclosed to prepare a new class of metalorganic copolymers that contain iron(II) clathrochelate building blocks by employing a mild and cost-effective copper-catalyzed [4 + 2] cyclobenzannulation reaction, using three specially designed diethynyl iron(II) clathrochelate synthons. The target copolymers CBP1-3 were isolated in high purity and excellent yields as proven by their structural and photophysical characterization, namely, Fourier transform infrared (FTIR), X-ray photoelectron spectroscopy (XPS) and UV–VIS absorption and emission spectroscopies. The thermogravimetric analysis (TGA) of CBP1-3 revealed an excellent thermal stability. Investigation of the adsorption properties of the target copolymers towards the carcinogenic methyl red dye from aqueous solution revealed a quantitative uptake in 30 min. Isothermal adsorption studies disclosed that methyl red uptake from aqueous solution followed the Langmuir model for all of the target copolymers, reaching a maximum adsorption capacity (qm) of 431 mg g−1. Kinetic investigation revealed that the adsorption followed pseudo-first-order kinetics with an equilibrium adsorption capacity (qe,cal) of 79.35 mg g−1, and the sorption performance was sustained even after several reuse cycles.
Introduction
Dyes are predominantly utilized as coloring agents in the textile industry and are employed in a myriad of products, such as pharmaceuticals, food and beverage, leather, plastics, cosmetics, and paper [1,2]. Dyes, whose global annual production is estimated to 7 × 10 7 tons [2,3], present several advantages, namely, the variety of their color palette, ease of application on various types of materials, structural diversity, and low energy consumption [4,5]. There are several methods to classify synthetic dyes, notably that are based on the chemical structure of their chromophore, hence, they could be grouped as acidic, basic, azoic, nitro, sulphur, etc. [6,7]. It is noteworthy that azo dyes, which are characterized by the presence of one or more azo groups (-N=N-), are widely employed in industry, with a production rate that exceeds half that of the total dyes synthesized annually [8,9]. Despite its undeniable importance and contribution to economic development [5,10], the textile industry is one of the largest global polluters as it consumes large amounts of fuels and chemical reagents [11][12][13]. Additionally, the textile industry uses massive quantities of freshwater in the various operations required for its production chain,
Synthesis of the Prototypical Monomer CBM
As a proof of concept, the prototypical iron(II) clathrochelate monomer CBM was prepared using the copper-catalyzed [4 + 2] cyclobenzannulation reaction conditions, where the diethynyl-containing iron(II) clathrochelate synthon CM2 was reacted with two equivalents of 2-((4-(tert-butyl)phenyl)ethynyl)benzaldehyde 5 in the presence of copper(II) triflate and trifluoroacetic acid (TFA) in refluxing dichloroethane overnight, affording CBM in a quantitative yield (Scheme 2). The structure of the latter was confirmed by 1H- and 13C-nuclear magnetic resonance (NMR), ESI-MS, and FTIR spectroscopy (Figures 1, S5, S10, S14 and S18). The chemical shifts assigned to the protons of the cyclohexyl groups appear as the peaks labeled a, b in Figure 1. Likewise, the chemical shifts observed at 1.25 ppm are assigned to the methyl (-CH3) protons of the tertiary butyl group (Figure 1). The 13C-NMR spectrum of CBM displays all of the expected aromatic peaks in the range of 152.4-125.0 ppm in addition to the chemical shifts of the aliphatic carbons of both the cyclohexyl and tertiary butyl groups at 34.9 ppm, 31.6 ppm, 26.7 ppm, and 22.1 ppm (Figure S10). In addition, the high purity of CBM was confirmed by electrospray ionization mass spectrometry (ESI-MS, Figure S14).
Synthesis of Copolymers CBP1-3
Synthesis of the target copolymers CBP1-3 (Scheme 3) was carried out using reaction conditions similar to those employed to make the prototypical monomer CBM described in Scheme 2. The copper-catalyzed [4 + 2] cyclobenzannulation reaction of the diethynyl iron(II) clathrochelate derivatives CM1-3 and 2,5-bis(phenylethynyl)terephthalaldehyde 6 afforded the target copolymers CBP1-3 in excellent yields in the range of 83-95% (Scheme 3). Table 1 summarizes the attempts carried out to optimize the copolymerization reaction conditions: when a 2.5 × 10−2 M solution of 2,5-bis(phenylethynyl)terephthalaldehyde 6 and an equimolar amount of CM1 was reacted in refluxing dichloroethane (DCE) in the presence of Cu(OTf)2 and TFA for 48 h, CBP1 was isolated as an insoluble solid in 48% yield (Table 1, entry 1). Thus, to improve the reaction yield, the concentration of comonomers CM1 and 6 was diluted to a molar concentration of 1.25 × 10−2 M, which afforded CBP1 in 65% yield (Table 1, entry 2). Further dilution of the comonomers to a concentration of 6 × 10−3 M resulted in the improvement of the reaction yield, affording CBP1 in 83% yield (Table 1, entry 3).
Scheme 3. Synthesis of copolymers CBP1-3.
Similar reaction conditions were employed in the copolymerization of 6 in the presence of an equimolar amount of either CM2 or CM3, thus affording CBP2 and CBP3 in 95% and 90% yields, respectively (Table 1, entries 4 and 5).
CBP1-3 were characterized by various techniques, namely, FTIR, XPS, TGA, UV-VIS absorption, and emission spectroscopies (Figures 2-5, S19-S21, S22, S23 and S25). Nevertheless, the target copolymers were found to be insoluble in common organic solvents, such as THF, DCM, DMSO, methanol, acetone, and chloroform, which prevented their molar mass determination. Figure 2 portrays the comparative FTIR absorption spectra for comonomer CM1 and its corresponding target copolymer CBP1. The characteristic stretching vibrations of the ethynyl (C≡C) group were detected at ~2211 cm−1 [46] for CM1, which disappeared from the spectrum of copolymer CBP1. It is noteworthy that the absorption bands identified at ~2955 cm−1 and ~1461 cm−1 correspond to the distinctive aliphatic C-H stretching and bending vibrations, respectively, which clearly indicates the presence of the butyl groups in CBP1 [47,48].
In addition, the fingerprint stretching vibrations were confirmed for each of the B-O (1396 cm −1 ), B-C (1180 cm −1 ), and aromatic C-H (820 cm −1 ) bending vibration peaks, which further supports the formation of target copolymer CBP1 [34,49]. Likewise, the FTIR absorption spectra of target copolymers CBP2,3 revealed their distinctive stretching and bending vibration peaks, which corroborated their successful formation (Figures 2 and S19-S21).
X-ray photoelectron spectroscopy (XPS) survey-scan spectra of CBP1-3 confirm the presence of all of their constituting elements. The C1s, O1s, and N1s binding energies were detected in the range of ~284.8-284.7 eV, 532.2 eV, and 400.5-400.1 eV, respectively, whereas those for B1s and Fe2p were revealed in the range of 191.1-191.0 eV and 708.9-722.1 eV, respectively (Figures 3, S22 and S23) [34]. Figure 4 illustrates the thermogravimetric analysis (TGA) of copolymers CBP1-3, depicting their 10% weight loss temperatures in the range of 200-310 °C, which indicates their relatively high thermal stability.
Interestingly, the UV-VIS absorption and emission spectra of the target copolymers CBP1-3 displayed similar features. The butyl- and cyclohexyl-containing iron(II) clathrochelate cyclobenzannulated copolymers CBP1,2 revealed a similar UV absorption band at ~312 nm, whereas the one with phenyl side groups, i.e., CBP3, displayed a strong absorption band at 297 nm (Figure 5). The emission spectra of CBP1,2 portrayed a broad peak with an intensity maximum at 442 nm, while the phenyl-containing copolymer CBP3 disclosed an emission maximum at 451 nm (Figures 5 and S25). The porosity of the target copolymers CBP1-3 was investigated using nitrogen adsorption-desorption experiments at 77 K and low relative pressure (Figures S26-S28). The Brunauer-Emmett-Teller (BET) method revealed a surface area of 74.0 m2 g−1 for the cyclohexyl-containing copolymer CBP2, whereas those with butyl and phenyl side groups, i.e., CBP1 and CBP3, showed lower BET surface areas of 7.0 m2 g−1 and 35.0 m2 g−1, respectively. The pore volumes of CBP1-3 derived from these isotherms disclosed values of 0.014 cm3 g−1, 0.081 cm3 g−1, and 0.030 cm3 g−1, respectively.
Methyl Red Adsorption Studies
Copolymers CBP1-3 were tested as adsorbents of the carcinogenic and mutagenic azo dye methyl red (MR) purchased from Merck® (CAS 493-52-7). The uptake capacity was evaluated by soaking an aliquot of CBP1-3 in an aqueous solution of MR (Figures 6, S29 and S30). The removal efficiency of MR by copolymers CBP1-3 was investigated by recording the UV-VIS absorbance spectra of the dye's aqueous solutions at different time intervals. The dye adsorption experiments were carried out by stirring a 5 mg sample of a given copolymer in a 5 mL aqueous solution of MR (20 mg L−1, pH = 7) at an ambient temperature. The adsorption efficiency, E (%), and amount of dye adsorbed by the copolymer, qe (mg g−1), were calculated using the following equations [25]: E (%) = 100 × (C0 − Ce)/C0 and qe = (C0 − Ce) × V/m, where C0 and Ce are the initial and equilibrium dye concentrations (mg L−1), respectively; m (g) is the quantity of the adsorbent used; and V (L) is the volume of dye solution.
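These two quantities are straightforward to evaluate; the short Python sketch below (ours) computes them for the batch conditions described in the text, using the fully decolorized case as an example.

```python
def removal_efficiency(c0, ce):
    """E (%) = 100 * (C0 - Ce) / C0."""
    return 100.0 * (c0 - ce) / c0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    """q_e (mg/g) = (C0 - Ce) * V / m."""
    return (c0 - ce) * volume_l / mass_g

# Illustrative batch conditions from the text: 5 mg adsorbent in 5 mL of 20 mg/L MR.
c0, ce = 20.0, 0.0                                    # quantitative uptake
print(removal_efficiency(c0, ce))                     # 100.0 %
print(adsorption_capacity(c0, ce, 0.005, 0.005))      # 20.0 mg/g
```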
The absorbance maximum peak intensity of MR detected at 430 nm noticeably decreased upon the addition of the target polymers CBP1-3 to the solution, which confirmed the latter to be very good adsorbents. Interestingly, all of the copolymers reached 100% adsorption capacity of MR, but at different time intervals, with CBP3 being the fastest by quantitatively removing MR from the aqueous solution in 30 min at room temperature (Figures 6, S29 and S30). It is worthwhile to note that the experiments were run three times and the adsorption values were reproducible.
To better comprehend the adsorption behavior of copolymers CBP1-3, the adsorption isotherms of the MR removal experiments were obtained by preparing different aqueous solutions of MR with initial concentrations ranging from 50 to 600 mg L −1 , where Langmuir and Freundlich linear isotherm models were employed to fit the adsorption isotherm data. In the case of the Langmuir isotherm model, the following linear equation was utilized [25]: 1/q e = 1/K L q m × 1/C e + 1/q m On the other hand, the linear equation below was employed for the Freundlich isotherm model [25]: Log q e = Log K F + 1/n Log C e where q e (mg g −1 ) denotes the equilibrium adsorption capacity, C e (mg L −1 ) represents the equilibrium dye concentration, and q m (mg g −1 ) indicates the maximum adsorption capacity. K L is the Langmuir constant, whereas K F and n are Freundlich constants correlated to the sorption capacity and sorption intensity, respectively ( Figures S31-S33). The Langmuir parameters were obtained by plotting the graph of 1/q e versus 1/C e , and those for Freundlich were derived from the plot of log q e versus log C e (Figures S31-S33). Both models were used to fit the equilibrium data obtained for the MR adsorption. It is worthwhile to note that the correlation coefficient (R 2 ) derived from the linear equation using the Langmuir model was found to be higher than that computed for the Freundlich isotherm model of MR (Figure S33), thus implying that the Langmuir isotherm is a more favorable model to illustrate the equilibrium data, and which suggests a homogenous adsorption and the formation of monolayers of MR dye on the adsorbates CBP1-3. Additionally, the maximum adsorption capacity (q m ) derived from the Langmuir model was found to be 199.20 mg g −1 and 219.8 mg g −1 for CBP2 and CBP1, respectively, and it reached 431.03 mg g −1 for the phenyl-bearing iron(II) clathrochelate cyclobenzannulated copolymer CBP3, which, to the best of our knowledge, was superior to the adsorption capacity values for most of the materials reported in the literature [5,24,50].
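As a hedged illustration of how the Langmuir and Freundlich parameters can be extracted from the linearized forms above (ours; the equilibrium data are invented), the two linear fits can be carried out as follows.

```python
import numpy as np

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g).
ce = np.array([10.0, 30.0, 60.0, 120.0, 250.0, 400.0])
qe = np.array([90.0, 180.0, 260.0, 330.0, 390.0, 410.0])

# Langmuir: 1/qe = (1/(KL*qm)) * (1/Ce) + 1/qm
slope_l, intercept_l = np.polyfit(1.0 / ce, 1.0 / qe, 1)
qm = 1.0 / intercept_l
kl = intercept_l / slope_l        # slope = 1/(KL*qm), so KL = intercept/slope
print(f"Langmuir: qm = {qm:.1f} mg/g, KL = {kl:.3f} L/mg")

# Freundlich: log qe = log KF + (1/n) * log Ce
slope_f, intercept_f = np.polyfit(np.log10(ce), np.log10(qe), 1)
kf, n = 10.0 ** intercept_f, 1.0 / slope_f
print(f"Freundlich: KF = {kf:.1f}, n = {n:.2f}")
```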
Linear and nonlinear pseudo-first- and pseudo-second-order kinetic experiments were carried out to better understand the adsorption mechanism of MR on CBP3 using an initial concentration of 500 mg L−1 of MR dye at different time intervals every 15 min, up to 150 min (Figure 7).
Linear and nonlinear pseudo-first-and pseudo-second-order kinetic experiments were carried out to better understand the adsorption mechanism of MR on CBP3 using an initial concentration of 500 mg L −1 of MR dye at different time intervals every 15 min, up to 150 min (Figure 7). The linear equation that is detailed below was employed to investigate the pseudofirst-order model [34]: ln (qe -qt) = ln qe -k1t Alternatively, the linear pseudo-second-order model is expressed by the following [34]: t/qt = t/qe + 1/k2qe 2 Additionally, in order to circumvent possible erroneous correlations by the linear equations above, nonlinear correlations were also employed to check the pseudo-firstorder and the pseudo-second-order models using the respective equations [51,52]: qt = qe(1 − e −k 1 t ) and Figure 7. Kinetic modelling of MR by CBP3 using linear pseudo-first-order (a) second-order (b) and nonlinear pseudo-first-order s(c) and second-order (d) kinetic models.
The linear equation that is detailed below was employed to investigate the pseudofirst-order model [34]: ln (q e − q t ) = ln q e − k 1 t Alternatively, the linear pseudo-second-order model is expressed by the following [34]: t/q t = t/q e + 1/k 2 q e 2 Additionally, in order to circumvent possible erroneous correlations by the linear equations above, nonlinear correlations were also employed to check the pseudo-first-order and the pseudo-second-order models using the respective equations [51,52]: q t = q e (1 − e −k 1 t ) and q t = (q e 2 k 2 t)/(1 + q e k 2 t) where q e (mg g −1 ) and q t (mg g −1 ) are the adsorption capacities at equilibrium and time t (min), respectively. k 1 is the rate constant of the pseudo-first-order model, whereas k 2 is the rate constant of the pseudo-second-order model. As shown in Table 2, the calculated adsorption capacity at equilibrium, q e,cal , was derived from the linear pseudo-first-order model by plotting ln(q e -q t ) versus t, whereas it was extrapolated from the plot of t/q t versus t for the linear pseudo-second-order model. Likewise, q e,cal was computed from the nonlinear pseudo-first-order and pseudo-secondorder models by plotting q t versus t and applying the relevant equations given above. Interestingly, Table 2 discloses the correlation coefficients, R 2 , of 0.9821 and 0.9866 derived from the respective linear and nonlinear equations of the pseudo-first-order models, which are higher than the ones derived from the linear and nonlinear pseudo-second-order models, R 2 , of 0.6559 and 0.9851, respectively. Moreover, the comparison of the experimental capacity at equilibrium, q e,exp = 76 mg g −1 with those calculated, q e,cal , of 79.35 mg g −1 and 88.43 mg g −1 for the linear and nonlinear pseudo-first-order models [53], respectively, clearly reveal a better agreement than those derived from the pseudo-second-order models, thus suggesting that the adsorption of MR by CBP3 follows a pseudo-first-order kinetic model (c.f. the plausible interaction between the copolymers and MR dye in Figure S34). Reusability experiments were carried out in order to test the adsorbing performance of copolymer CBP3 towards MR after several adsorption-desorption cycles. Thus, a sample of CBP3 copolymer loaded with MR was ultrasonicated in deionized water for 10 min followed by its isolation through vacuum filtration over a membrane filter before adding the regenerated copolymer sample to a freshly prepared aqueous solution of MR. This procedure was repeated for several cycles, proving a removal efficiency of 90.4% for CBP3 even after eight cycles (Figure 8).
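A hedged Python sketch (ours; the uptake-versus-time data are invented and chosen only to resemble the reported qe,exp of about 76 mg g−1) of fitting the nonlinear pseudo-first- and pseudo-second-order models with scipy is given below.

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    """Nonlinear pseudo-first-order: q_t = qe * (1 - exp(-k1 * t))."""
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    """Nonlinear pseudo-second-order: q_t = qe^2 * k2 * t / (1 + qe * k2 * t)."""
    return qe ** 2 * k2 * t / (1.0 + qe * k2 * t)

# Hypothetical uptake data (t in min, qt in mg/g).
t = np.array([15, 30, 45, 60, 75, 90, 105, 120, 135, 150], dtype=float)
qt = np.array([30.0, 48.0, 59.0, 66.0, 70.0, 73.0, 74.5, 75.3, 75.8, 76.0])

for name, model in [("pseudo-first-order", pfo), ("pseudo-second-order", pso)]:
    params, _ = curve_fit(model, t, qt, p0=[80.0, 0.01], maxfev=10_000)
    residuals = qt - model(t, *params)
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((qt - qt.mean()) ** 2)
    print(f"{name}: qe = {params[0]:.1f} mg/g, k = {params[1]:.4f}, R^2 = {r2:.4f}")
```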
Conclusions
We report the synthesis of a new class of three metalorganic copolymers CBP1-3 bearing iron(II) clathrochelate unit and interlinked by anthracene groups via a typical copper-catalyzed [4 + 2] cycloaddition polymerization reaction conditions. The target copolymers were isolated in excellent yields and revealed excellent removal capacities of the carcinogenic dye methyl red from aqueous medium, especially CBP3, which disclosed an ultrafast and superior adsorption efficiency up to 100% in 30 min and exhibited a maximum adsorption capacity (q m ) of 431 mg g −1 with the possibility to regenerate the hitherto mentioned polymer for several cycles. The novel iron (II) clathrochelate-based copolymers presented herein confer several advantages, particularly a versatile synthesis methodology, low cost, superior stability, and excellent adsorption capacity. Therefore, these materials are prominent candidates for environmental remediation applications, specifically as adsorbents of the hazardous azo dyes. Figure S34: Plausible interaction between the copolymer and MR dye; Table S1: Chemical structures of some azo dyes; Table S2: CHN analysis of CM3. Institutional Review Board Statement: "Not applicable" for studies not involving humans or animals.
Data Availability Statement:
The raw/processed data required to reproduce these findings can be shared upon demand. | 2023-07-06T08:15:17.589Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "46402807e1138e511a1abb0ea8bd68216664207a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "46402807e1138e511a1abb0ea8bd68216664207a",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247783900 | pes2o/s2orc | v3-fos-license | Differences Between Posttraumatic Growth and Resiliency: Their Distinctive Relationships With Empathy and Emotion Recognition Ability
Posttraumatic growth (PTG) and resiliency have been observed among people who experienced life crises. Given that the direct relationships between PTG and resiliency have been equivocal, it is important to know how they are different in conjunction with cognitive ability. The purpose of this study is to examine how perceived PTG and resiliency would be, respectively, associated with empathy and emotion recognition ability. A total of 420 college students participated in an online survey requiring them to identify emotions based on photographs of facial expressions, report their traumatic experiences, and respond to the PTG Inventory, Brief Resilience Scale, and Questionnaire of Emotional Empathy. The results suggest that perceived PTG was not associated with empathy but significantly predicted increased emotion recognition, whereas resiliency showed a negative relationship with empathy but no significant relationship with emotion recognition. These findings demonstrate that self-perceived PTG may be associated with cognitive ability, which could be due to one’s growth within relationships and social interactions. Even though growing after trauma may promote resilient characteristics, the current results indicate that PTG and resiliency may foster different outcomes. Since empathy and emotion recognition are affected by other contextual factors, future studies should assess how empathy and the type of errors in emotion recognition may be associated with situational factors that are beyond personal factors such as post-traumatic life experiences or personality.
INTRODUCTION
Many individuals may face a traumatic or highly stressful event at some point in their lives. That could mean experiencing a natural disaster, the loss of a loved one, an accident or injury, or conflicts within their family and relationships. Facing adversity can have the capacity to shatter and challenge one's core beliefs. This process can lead an individual to transform in a way that positively impacts their quality of life and helps them to realize how they have grown as a person (Tedeschi and Calhoun, 2004;Joseph and Butler, 2010;Joseph et al., 2012), known as posttraumatic growth or PTG. However, facing adversity does not always shake beliefs or make people struggle, but can rather allow them to simply bounce back, known as resiliency (Yao and Hsieh, 2019).
Posttraumatic growth explains the positive psychological changes as a result of a struggle with a major life crisis or traumatic event (Calhoun and Tedeschi, 2014;Tedeschi et al., 2018). This process may be reflected through an individual gaining a greater appreciation for life, relating to others more, making a spiritual or existential change, having an increased sense of personal strength, or realizing new possibilities in life (Tedeschi and Calhoun, 2004). After experiencing adversity, individuals may have the capacity to learn from the event and reshape the way that they perceive themselves, their lives, and their world. For example, after someone has dealt with the loss of a loved one, they may realize that their relationships with others are very important, and therefore, build stronger bonds with their family and friends. Doing so ultimately may increase their level of social support, allowing for safety and security if uncertain times were to present themselves again.
On the contrary, if an individual is able to recover from a traumatic experience by exhibiting certain characteristics (e.g., flexibility and optimism) along with using various resources that are available for them (e.g., adaptive coping strategies and social support), they could be described as being resilient. Resilience explains the likelihood that an individual can overcome highly stressful events, remaining psychologically healthy despite undergoing hardships (Rutter, 2007). Not to be misunderstood with PTG, which involves severe psychological struggle due to the challenged core beliefs following a trauma -resilience is typically understood to be the way that an individual "bounces back." For example, someone who has experienced a stressful financial pitfall may be able to use personal resources and mechanisms (e.g., coping skills, emotion regulation, hope, optimism) to help them push through the difficult time in order to recover to the level of financial stability they were able to maintain before the decline -regaining normality in their life (Duan et al., 2015). This means that they did not allow their finances to continue to plunder and negatively impact their life, but they also did not need to overwork themselves or overthink their beliefs enough to cause them to reevaluate several aspects of their life in order to change them in a profound or transformational way. In short, resilience focuses on adapting and adjusting to adversity with or without struggling, whereas, PTG focuses on transformative changes resulting from psychological struggle caused by shattered beliefs or worldview.
Due to these conceptual differences between PTG and resiliency, the relationship between them in literature is inconsistent. At least two studies have found that there is a negative relationship between PTG and resiliency (Levine et al., 2009;Zerach et al., 2013). This could possibly be due to highly resilient people being less influenced by a trauma experience, withholding their need for growth. Other studies found that there is a positive relationship between PTG and resiliency, suggesting that the more likely someone is to experience growth after trauma, the more likely they are to exhibit characteristics of resilience as well (Bensimon, 2012;Yu et al., 2014;Duan et al., 2015). Literature has identified a curvilinear relationship between PTG and resilience which may suggest there being a possibility of a certain threshold, or "tipping point" associated with the two constructs, where the more resilient an individual is, the more likely they are to exhibit growth following adversity or vice versa, up to a certain point in which either the individual could become too resilient to experience growth or be influenced by traumatic events (Li et al., 2015;Kaye-Tzadok and Davidson-Arad, 2016). And yet, studies have also found that there is no linear relationship between PTG and resiliency (DeViva et al., 2016;Vieselmeyer et al., 2017). Given that the direct relationships between PTG and resiliency are equivocal, it is important to further investigate the distinct characteristics of each concept.
In order to reveal potentially distinct characteristics of PTG and resiliency, the current study focuses on empathy and emotion recognition ability because previous research has found that empathy is positively associated with PTG (Tedeschi and Calhoun, 2004), but little study was done for resiliency. Emotional or affective empathy is the tendency to feel the emotions of other people while keeping an other-focused and compassionate perspective. It is the ability to understand the emotions of another person that is an automatic, and often unconscious, reaction commonly understood to be the meaning behind the phrase of placing oneself in another person's shoes (Mehrabian and Epstein, 1972). For example, a highly emotionally empathetic individual may cry during a movie where the main character's family member has passed away. This empathetic person is able to understand the emotions of that character so well, that they exhibit or feel those emotions within themselves. It does not solely pertain to feeling sorry or having pity for someone, but it is displayed by gaining a sense of connection for what someone else may be going through or is currently feeling. A highly empathetic person is then able to use a combination of sympathy and compassion to console and approach others in a meaningful and positive way (Batson and Shaw, 1991;Pavey et al., 2012).
Findings suggest that the more growth an individual has experienced following trauma, especially growth involving others, such as within relationships, social support, and being more compassionate or more connected to others, the more likely they are to be empathetic towards them (Tedeschi and Calhoun, 2004;Cofini et al., 2014). Therefore, having experienced adversity that provoked PTG may allow an individual to be better at feeling and understanding those similar emotions in other people (Swickert et al., 2012). On the other hand, at least one study (Morice-Ramat et al., 2018) indicates that certain levels of empathy may promote more resilient characteristics, but the relationships between empathy and resiliency need to be further studied, because being able to bounce back following a trauma involves intra-personal characteristics; thus, unlike PTG, resiliency is conceptually more distant from empathy that involves interpersonal characteristics.
Emotions are essentially what prepare us to deal with or react to important events and situations without having to think deeply about them (Ekman, 1972). Not only do people feel emotions and have the capacity to understand the emotions that someone else may be feeling, but people physically express emotions as well. There are seven basic and universal emotions: anger, fear, disgust, sadness, happiness, surprise, and contempt (Ekman, 1970, 1972; Ekman and Keltner, 1997). These emotions are automatically expressed through our facial muscles when we experience them, known as facial expressions (Ekman et al., 1971). Research has found that highly empathetic people are better able to accurately identify facial expressions in others (Carr and Lutjemeier, 2005; Besel and Yuille, 2010). Empathy can be linked to mirror neurons that demonstrate neurological processes that coincide with someone's level of empathy (Debes, 2017). This suggests that having an increased level of understanding for other individuals' current emotional well-being is what aids in being able to read others' emotions. The most current PTG theoretical model has identified that self-recognized PTG can be associated with outcomes that span beyond well-being, including expanded coping repertoires, increased compassion, and improved wisdom, all of which aid in the development, maintenance, and improvement of interpersonal relationships (Tedeschi et al., 2018). Therefore, since empathy and similar concepts are related to PTG, accurately identifying emotion expressions could be associated with PTG as well.
On the other hand, since being highly resilient is not directly and conceptually linked to empathy, the relationship between resiliency and emotion recognition ability is also unknown. In addition to the curvilinear relationship some studies have found between PTG and resilience, resilience researchers also suggested early on that there was a need to explore whether there is a capacity or threshold in which an individual can reach that "caps" their ability to continue to adapt, adjust, or be influenced by change over their lifespan of consistently withstanding adversity (Staudinger et al., 1993;Werner, 2005). Therefore, it may be important to explore this phenomenon in connection to social perspectives, relations, and interactions. It's possible that one's interpersonal development, in the contexts of emotional empathy and emotion recognition, may be affected over time due to a constant resistance or recovery to hardships. Overall, the ability to accurately read the emotions of others through their body language and facial expressions is a vital skill to have in daily life. Identifying the feelings of others allows an individual to determine their actions and behaviors toward them, providing that individual with the necessary information to respond accordingly.
Revealing the relationships between PTG and emotion recognition ability is also expected to make a theoretical contribution, because PTG reports are retrospective, requiring an individual to reflect on how they were before the traumatic event which, in turn, creates discrepancies between self-reported PTG and actual growth and/or cognitive improvement (Frazier et al., 2009). It is possible that people may amplify when they estimate how they changed by having a distorted view of their growth following the trauma, or simply not know just how much of an improvement they actually made (Taylor et al., 2000). Therefore, there has been debate on whether perceived growth is an illusory concept that is susceptible to deception (Maercker and Zoellner, 2004). It is important to examine how perceived PTG is related to cognitive ability in order to establish a concrete understanding of PTG's benefits in someone's life. Even though PTG is conceptually linked to increased empathy levels, and empathy shares a positive relationship with emotion recognition ability (ERA), current literature has not directly examined the relationship between PTG and ERA. Experiencing growth after adverse experiences could improve cognitive processing due to the individual's increased participation in social settings (Stephens et al., 2013) and cognitive/emotional processing that they are engaged with. Therefore, the current study aimed to investigate the relationships between perceived PTG, resilience, empathy, and ERA. Given that this is the first study that investigates the associations among all these variables, no specific hypotheses were generated. However, due to the equivocal association between PTG and resilience, we expected that the size and direction of the relationships between PTG and empathy/ERA would be different than the relationships between resilience and empathy/ERA.
Participants and Procedure
The sample consisted of 420 undergraduate students at a midwestern university in the US who had a mean age of 21.04 years (SD = 5.15). Approximately 65% of participants identified as White, 12% as African American, 10% being of Middle Eastern Heritage, 7% as Asian, and 5% identified as other. Additionally, about 80% of the sample were female and 19% were male. Two of the participants (less than 1%) did not provide their sex.
Participants were recruited through a university's subject pool and received class credit upon completion. They were first asked to provide demographic information and to identify emotions based on photographs of facial expressions. They then completed a questionnaire regarding empathy, which was followed by identifying their trauma experiences and PTG. Lastly, they completed a questionnaire measuring resilience. The study was approved by an internal review board (IRB-FY2020-16). Data were analyzed using SPSS 26.
Traumatic Events
Participants indicated which out of 13 traumatic events (e.g., "natural disaster, " "accident or injury, " "death of someone close to you") they had experienced in the last five years, a measure that has been used in previous research (Taku, 2011). Following the trauma checklist, they identified which event impacted them the most (Taku, 2013).
Posttraumatic Growth
The PTG Inventory-Short Form (PTGI-SF; Cann et al., 2010; α = 0.91) was used to measure the participants' level of perceived PTG as a result of the traumatic event that most impacted them (e.g., "I changed my priorities about what is important in life"). For 10 items, the participants were asked to indicate the degree to which each change had occurred for them on a 6-point Likert scale ranging from 0, "not at all, " to 5, "very great degree." Participants that did not identify a trauma event (n = 10) were excluded when analyzing total perceived PTG scores.
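As a rough illustration of how a short scale like the PTGI-SF is typically scored and checked for internal consistency, the sketch below sums the 10 item ratings into a 0-50 total and computes Cronbach's alpha. The file name and the ptgi_1 ... ptgi_10 column names are hypothetical placeholders rather than the study's actual data layout, and the study itself used SPSS rather than Python.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = participants)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical layout: one row per participant, columns ptgi_1 ... ptgi_10
# holding the 0-5 ratings of the ten PTGI-SF items.
df = pd.read_csv("survey.csv")
ptgi_items = df[[f"ptgi_{i}" for i in range(1, 11)]]

df["ptg_total"] = ptgi_items.sum(axis=1)            # possible range 0-50
print(f"Cronbach's alpha = {cronbach_alpha(ptgi_items):.2f}")
```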
Empathy
The Questionnaire Measure of Emotional Empathy (QMEE; Mehrabian and Epstein, 1972; α = 0.87) with 33 items (e.g., "it makes me sad to see a lonely stranger in a group") was rated on a 9-point Likert scale ranging from 1, "very strong disagreement, " to 9, "very strong agreement."
Emotion Recognition
The Standard Expressor Version of the Japanese and Caucasian Facial Expressions of Emotions (JACFEE; Matsumoto and Ekman, 1988;Ekman and Matsumoto, 1993) was used to measure an individual's ability of identifying the seven universal emotions: anger, disgust, contempt, fear, happiness, sadness, and surprise. The original set consists of a total of 130 photographed expressions from nine expressers (i.e., five Caucasian males, three Caucasian females, and one Japanese male). However, to diversify the measure as much as possible, as well as account for burnout and online efficiency, only a total of 24 photographs were used; a set of 8 facial expressions (i.e., anger, contempt, disgust, fear, happiness, sadness, surprise, and neutral) from each of 3 expressers (i.e., one Japanese male, one Caucasian female, and one Caucasian male). The 24 photographs were presented in a randomized order where the participants were asked to answer, "what emotion is this person expressing?" The amount of expressions identified correctly out of the 24 emotion expressions was used for a total ERA score. Only participants that answered all 24 items were included when analyzing ERA.
Posttraumatic Growth, Resilience, and Empathy
As shown in Table 1, a weak positive relationship was found between PTG and resilience, r = 0.19, p < 0.01. PTG and empathy were not correlated with one another (r = 0.09, p = 0.08), but resilience and empathy were found to be negatively correlated with one another, r = −0.34, p < 0.01.
PTG, Resilience, Empathy and Emotion Recognition Ability
Unlike the self-report scales (i.e., PTG, resilience, and empathy), ERA reflects cognitive ability through the identification of universal emotion expressions; our participants were able to recognize more than half of the emotions accurately, leading to a non-normal distribution. Due to that, the mean score of ERA, 19, was used as a cutoff to create two groups: an ERA-low group (n = 144) that identified fewer than 19 of the 24 emotions correctly, and an ERA-high group (n = 245) that identified 19 or more emotions correctly. A logistic regression model was created to test the likelihood that PTG and resilience would predict ERA group differences. As displayed in Table 2, the model as a whole was statistically significant: χ2(2, N = 372) = 7.38, p = 0.03. This indicated that approximately 2% of the variance of ERA can be explained by PTG and resilience. However, only PTG significantly predicted (p = 0.02) differences between the ERA-low group and the ERA-high group, whereas resilience made no significant contribution in predicting ERA group differences (p = 0.10).
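A minimal sketch of the analysis described above, dichotomizing ERA at the sample mean of 19 correct answers and regressing group membership on PTG and resilience, could look like the following in Python with statsmodels. The column names and file name are hypothetical, and the original analysis was run in SPSS 26, so the variance-explained convention reported there may differ from the pseudo-R² shown here.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical column names: ptg_total, resilience_total, and era_total
# (number of the 24 expressions identified correctly).
df = pd.read_csv("survey.csv")
data = df.dropna(subset=["ptg_total", "resilience_total", "era_total"]).copy()

# Dichotomize ERA at the sample mean of 19 correct, as described above.
data["era_high"] = (data["era_total"] >= 19).astype(int)

predictors = sm.add_constant(data[["ptg_total", "resilience_total"]])
model = sm.Logit(data["era_high"], predictors).fit()

print(model.summary())                  # per-predictor coefficients and p-values
print("McFadden pseudo R^2:", round(model.prsquared, 3))
```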
DISCUSSION
The current study was designed to examine the relationships between PTG, resiliency, empathy, and ERA. Specifically, we investigated the ways in which PTG and resiliency may be different by analyzing their potential distinctive relationships with empathy and one's accuracy in identifying facial expressions. Overall, PTG and resiliency were found to have a significant but weak positive relationship, and their respective relationships with both empathy and emotion recognition were different. First, PTG and empathy were found to be uncorrelated. This suggests that positive changes an individual perceives as a result of a trauma, such as appreciating life, having more compassion, and being able to do better things in life, and their ability to understand how others feel were independent from each other. This may be because PTG includes multiple domains, ranging from content that is not directly related to empathy, such as finding new opportunities that would not have been available without the specific triggering event, to content that should be related to empathy, such as being more compassionate for others, and they might cancel each other out. On the other hand, empathy and resilience showed a negative relationship. This could explain that the more resilient someone is, the less empathetic they are or vice versa. Since one study has suggested that empathy could be a predisposition of resilience (Morice-Ramat et al., 2018), the current findings may suggest that the more resilient people become, the less likely they may be able to relate and share emotional experiences with others -or perhaps, that the more empathic they are, the less resilient they are. It is possible that there are other factors, such as self-sufficiency, autonomy, self-confidence or toughness, that may cause these constructs to be inversely related with one another. The heightened ability to continuously overcome obstacles may cause highly resilient people to develop and remain at an emotional equilibrium, not being heavily influenced by certain situations or susceptible to others' emotional states, and therefore, making them less sensitive to others who may be strongly influenced by their daily circumstances that causes them to both feel and express a wide range of emotions. These results of inconsistent relationships with empathy indicate PTG and resilience differ.
Second, the current study suggested that PTG, but not resiliency, predicted emotion recognition ability. More specifically, PTG significantly predicted ERA group differences, where higher perceived PTG levels were more likely to be associated with belonging to the ERA-high group, indicating that higher growth is more likely to lead to increased emotion recognition. This suggests that perceived PTG may not be entirely illusory, since it was associated with the cognitive ability of identifying emotions in pictures, which is unrelated to each person's life narrative. On the other hand, resilience did not significantly predict ERA group differences, suggesting that being resilient and cognitive abilities in reading others' emotions are independent from each other. Even though someone highly resilient may show a lower level of empathy, that does not necessarily mean that their ability to identify emotions in others is also low, since the results showed no significance. It is important to note, however, that due to the non-normal distribution of ERA scores, participants were assigned into the two ERA groups (low-high) using the cutoff of 19 for this study, which means some of the participants in the ERA-"low" group were still able to identify 75% of the emotions correctly (e.g., 18 out of the 24 pictures). Similarly, some of the participants in the ERA-"high" group were only able to identify 83% of the emotions correctly (e.g., 20 out of 24 pictures); thus, they showed a few errors as well.
IMPLICATIONS, LIMITATIONS AND FUTURE DIRECTIONS
Posttraumatic growth and resiliency are both processes that one may experience following a potentially traumatic event; they share similar characteristics but are also very distinct from one another. Both growing and bouncing back after adversity are positive constructs, but it is important to understand the potential differences. This study lends insight into the ways in which PTG was related to ERA but not to empathy, whereas resilience was associated with lower empathy and showed no conclusive relationship with emotion recognition accuracy.
Empathy and ERA are important because they provide the knowledge someone needs that allows them to respond to others in the most constructive way. For example, an empathetic person who is also fairly good at recognizing emotions is able to notice that their friend is sad based on their facial expressions and then empathize with them because they know what sadness feels like. Since they have the knowledge to accurately identify that their friend is sad and use their previous life experiences to understand their sadness, they are able to recall what it is they may have needed from someone when they were in the same position. They may recall that in their own time of sadness, they desired a hug from their loved one or wanted to talk about what caused them to feel that way. Due to that understanding, they are able to react to their friend in a similar way. This may then provide comfort and support to their friend, increasing the quality of their relationship, which would lead to a stronger bond between them. Being both empathetic and high in person perception allows an individual to notice behaviors and resonate with them, further allowing someone to respond in an appropriate manner. Strong interpersonal skills are necessary for effectively communicating, connecting, and collaborating with others, prospering in professional matters, as well as developing and maintaining a safe and secure social support system, which can all result in a good quality of life and overall wellbeing.
The more independent, intrapersonal nature of resilience may be the biggest aspect in which it differs from PTG. Resilience causes one to call upon personal skills and characteristics to recover from tragedy; PTG involves this as well, but in addition it changes the way in which people relate, interact, and express themselves with others. This was recently demonstrated in a cross-cultural study showing that having positive experiences when disclosing a trauma to others was the only significant predictor of PTG across 10 countries (Taku et al., 2021). PTG may be more realized when the experience was shared with at least one person who can be there for them, whereas resiliency may be more recognized without the presence of others. Programs that stress the importance of being resilient and programs that stress the importance of transformational growth may be able to complement each other well. Focusing on implementing practices that involve becoming more empathic toward others may benefit individuals who are highly resilient. The applications of PTG can now highlight not only feeling stronger, appreciating life more, becoming spiritually connected, realizing new possibilities in life, and developing stronger bonds with others, but may also include cognitive abilities in judging others' emotions. This study provides evidence showing that perceived PTG may not be entirely illusory, and may portray quantifiable objective positive changes in terms of emotion recognition accuracy.
Even though this study provides insight into the relationships between PTG, resiliency, empathy, and ERA, there are limitations. The lack of diversity within the sample demographics, such as age, race, and gender, makes it difficult to generalize these findings to various other populations. Because the time participants took to identify the facial expressions was not recorded or limited, participants could deliberate for as long as they wished, which could have contributed to the overall high accuracy rate. Lastly, the self-report online nature of the study makes the research susceptible to inaccuracy; however, the sample size is substantial enough to buffer against most inaccuracies.
Despite these limitations, this study has fostered further exploration into the differences between PTG and resiliency. Specifically, we identified that the factors that can distinguish PTG and resiliency may be within interpersonal constructs such as empathy and ERA. Future research should explore what other factors can help explain the differences as well as overlaps between PTG and resiliency. Furthermore, it is important to investigate what types of growth may lead to cognitive improvement over others along with which emotions (e.g., anger, sadness) are easier to identify over others among people who experienced PTG as opposed to people who are highly resilient. Expanding this study to a wider audience of various different backgrounds would make the findings more applicable and generalizable to more populations. Research should also explore whether the amount/type of trauma events experienced affects an individual's PTG, resilience, empathy, and emotion recognition ability. Replicating this study with the addition of a cognitive empathy measure, a more diverse expressor measure, and a timing feature for identifying the facial expressions, would further provide insight into the relationships between PTG, resiliency, empathy, and emotion recognition.
In summary, the current study identified how characteristics of PTG and resiliency may differ from one another through interpersonal contexts such as with empathy and identifying facial expressions. These findings suggest that it may be important to acknowledge the potential differences between PTG and resiliency and continue to pinpoint the ways in which PTG and resiliency can be better understood in order to properly approach, teach, and implement these constructs.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Oakland University Institutional Review Board. The patients/participants provided their written informed consent to participate in this study. | 2022-03-30T14:07:16.825Z | 2022-03-28T00:00:00.000 | {
"year": 2022,
"sha1": "2e2086d5db8904557e3a3525f00f85a26845044a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "2e2086d5db8904557e3a3525f00f85a26845044a",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
73474301 | pes2o/s2orc | v3-fos-license | Fabrication of Cu2ZnSnS4 (CZTS) Nanoparticle Inks for Growth of CZTS Films for Solar Cells
Cu2ZnSnS4 (CZTS) is a promising candidate material for photovoltaic applications; hence, eco-friendly methods are required to fabricate CZTS films. In this work, we fabricated CZTS nanocrystal inks by a wet ball milling method, with the use of only nontoxic solvents, followed by filtration. We performed centrifugation to screen the as-milled CZTS and obtain nanocrystals. The distribution of CZTS nanoparticles during centrifugation was examined and nanocrystal inks were obtained after the final centrifugal treatment. The as-fabricated CZTS nanocrystal inks were used to deposit precisely controlled CZTS precursor films by a spin-coating method, followed by a rapid high-pressure sulfur annealing method. Both the grain growth and crystallinity of the CZTS films were promoted and the composition was adjusted from S-poor to S-rich by the annealing. XRD and Raman characterization showed no secondary phases in the annealed film, confirming the absence of detrimental phases. A solar cell efficiency of 6.2% (open circuit voltage: Voc = 633.3 mV, short circuit current: Jsc = 17.6 mA/cm2, and fill factor: FF = 55.8%) with an area of 0.2 cm2 was achieved based on the annealed CZTS film as the absorber layer.
Introduction
In recent years, kesterite Cu 2 ZnSnS 4 (CZTS) and Cu 2 ZnSn(S, Se) 4 (CZTSSe) solar cells have drawn attention because of their promise as an absorbing layer for applications in thin-film photovoltaics owing to its low cost, nontoxicity and earth abundance of its elemental components as well as an adjustable bandgap [1][2][3]. One advantage of CZTS over other kinds of chalcopyrite-related solar cells is its suitability for achieving high efficiency solar cells through nonvacuum fabrication methods. Furthermore, the world record conversion efficiency of CZTSSe solar cells is currently 12.6% [4] based on a hydrazine pure solution approach. There have been several reports on the fabrication of CZTS or CZTSSe solar cells. Both vacuum methods, such as sputtering [5], coevaporation [6], epitaxial methods [7], and nonvacuum methods [8][9][10][11], have been reported. Nonvacuum methods are lower in cost and more suitable for mass production than are vacuum methods. Among nonvacuum methods, the highest conversion efficiency CZTS solar cells are based on molecular precursor solutions or nanoparticle dispersions [12,13]. Although these kinds of fabrication methods are appealing because of their low complexity, low-cost, and scalability, such methods are complicated by the need for toxic solvents or metal-organic solutions that contain large amounts of organic contaminants, which induce cracking during the following annealing process [14,15]. The use of the toxic and unstable solvent hydrazine requires all processes for ink and film preparation to be performed under an inert atmosphere. As a result, it is difficult to adapt this approach to low-cost and large-scale solar cell fabrication.
In this work, we report a simple technique for fabricating CZTS nanoparticles ink by a wet ball milling method using nontoxic ethanol and 2-(2-ethoxyethoxy) ethanol as the solvents. A similar study has been reported by Woo, et al.; an efficiency of 7% was achieved [16]. However, the use of CZTS powder to fabricate CZTS film is expected to have the following benefits. (1) The fabrication process is simpler. (2) The growth of grains will be promoted because the grain boundary meltdown temperature is lowered. (3) Stoichiometric film compositions are easier to obtain because the chemical reactions are less complicated. The use of nontoxic solvents is more cost-effective and environment friendly, which is important for practical photovoltaic applications. The ink was used to fabricate CZTS thin films (precursors) by a spin-coating method, followed by annealing the precursor in a sulfur-rich atmosphere. The commercial CZTS powder was obtained from Mitsui Kinzoku, and detailed information of its characteristics is currently unavailable. The sulfur vapor not only prevents the formation of volatile Sn-S compounds but also supplies S atoms to make the CZTS films sulfur-rich, which is a requirement for high performance solar cells. The procedure for fabricating CZTS films from CZTS powder is reported in detail in this paper.
Sample Preparation
Figure 1a-c illustrates the process for fabricating CZTS nanoparticle ink. In the ball milling system, a 1-mm ball, 50-µm ceramic balls, and CZTS powder were mixed together in the mill pot. A 5-mL portion of ethanol was added to improve the wet milling effect. Figure 1a shows a schematic diagram of the milling system. The milling pots were rotated along their own axis together with the base plate. The milling process was performed for 40 h. After ball milling, the whole mixture was strained through a filter screen to obtain particles smaller than 32 µm, and nontoxic ethanol and 2-(2-ethoxyethoxy) ethanol were used to wash the milling balls to increase nanoparticle recovery, as shown in Figure 1b. Through this procedure, the milling balls and large particles of CZTS (>32 µm) were removed whereas a mixture of relatively small CZTS particles (<32 µm) and the solvents were retained. We used 2-(2-ethoxyethoxy) ethanol as a dispersion agent to prevent coagulation of the nanoparticles, and ethanol was used to reduce the viscosity of the solvent and promote precipitation of large particles during the following centrifugation. The resulting solution was then ultrasonically processed to disperse the particles in the solvents for 1 h.
The as-prepared mixture of CZTS and solvents was first centrifuged at a low speed of 1500 rpm to remove particles over several µm in size. The precipitate was disposed of and the upper layer of the solution was decanted for further centrifugal treatment. The aforementioned processes were repeated three times at a higher speed of 6000 rpm and the nanoparticles were obtained. The nanoparticle ink was obtained with a concentration of 200 mg/mL by adjusting the quantity of ethanol. The nanoparticle ink was then used to fabricate CZTS precursors by spin-coating. Figure 2a shows a schematic diagram of the spin-coating system. The substrate was rotated at a speed of 2000 rpm and the CZTS ink was dripped on at a speed of 5 µL/min. The final CZTS precursor film showed a thickness of 1-1.5 µm. Finally, the precursors were annealed in a sulfur-rich atmosphere to improve the grain size and crystallinity. The sulfurization process was conducted by sealing the precursor and powdered sulfur into a vacuum quartz tube with a length of 15 cm, which was placed in the annealing furnace (FP410, Yamato Company, Tokyo, Japan), as shown in Figure 1b. The furnace was heated to 600 °C within 15 min and the vapor pressure of sulfur was approximately 0.1 atm. The annealing process was performed for 20 min after the system achieved 600 °C. Then the sample was allowed to cool to room temperature naturally.
A typical structure of a CZTS solar cell is shown in Figure 3. The as-grown CZTS film was used as the absorbing layer. A CdS layer with a thickness of 50 nm was fabricated by a chemical bath deposition method as the buffer layer. Intrinsic ZnO with a thickness of 100 nm and B-doped ZnO with a thickness of 400 nm were then sputtered as the window layer. To measure the performance of the solar cell, an Al grid was evaporated as the front electrode.
Characterization
(a) (b) A typical structure of a CZTS solar cell is shown in Figure 3. The as-grown CZTS film was used as the absorbing layer. A CdS layer with a thickness of 50 nm was fabricated by a chemical bath deposition method as the buffer layer. Intrinsic ZnO with a thickness of 100 nm and B-doped ZnO with a thickness of 400 nm were then sputtered as the window layer. To measure the performance of the solar cell, an Al grid was evaporated as the front electrode.
Characterization
The morphology of the annealed CZTS films was characterized with a scanning electron microscope (SEM, JSM-7001F, Tokyo, Japan) equipped with a JED-2300T energy dispersive spectroscopy (EDS) system (Tokyo, Japan) operating at an acceleration voltage of 10 kV. EDS, for compositional analysis, was measured at an acceleration voltage of 15 kV. The grain size distribution was measured with a transmission electron microscope (TEM, JEOL JEM-2100F, Tokyo, Japan). X-ray diffraction (XRD) analysis was performed with a Rigaku SmartLab2 with a Cu-K source and the generator was set to 20 mA and 40 kV. Raman measurements were performed with a RENISHAWproduced inVia RefleX type Raman spectrometer equipped with an Olympus microscope with a 1000 magnification lens at room temperature. The excitation laser line was 532 nm. The solar cell performance was measured with a 913 CV type current-voltage (J-V) tester (AM1.5) provided by a EKO (LP-50B, Tokyo, Japan) solar simulator. The simulator was calibrated with a standard GaAs
The morphology of the annealed CZTS films was characterized with a scanning electron microscope (SEM, JSM-7001F, Tokyo, Japan) equipped with a JED-2300T energy dispersive spectroscopy (EDS) system (Tokyo, Japan) operating at an acceleration voltage of 10 kV. EDS, for compositional analysis, was measured at an acceleration voltage of 15 kV. The grain size distribution was measured with a transmission electron microscope (TEM, JEOL JEM-2100F, Tokyo, Japan). X-ray diffraction (XRD) analysis was performed with a Rigaku SmartLab2 with a Cu-K source, and the generator was set to 20 mA and 40 kV. Raman measurements were performed with a RENISHAW-produced inVia RefleX Raman spectrometer equipped with an Olympus microscope with a 1000× magnification lens at room temperature. The excitation laser line was 532 nm. The solar cell performance was measured with a 913 CV type current-voltage (J-V) tester (AM1.5) provided by an EKO (LP-50B, Tokyo, Japan) solar simulator. The simulator was calibrated with a standard GaAs solar cell to obtain the standard illumination density (100 mW/cm2).
Centrifugation to Obtain CZTS Nanoparticle Ink
Figure 4a-e shows TEM images of the CZTS particle distribution of the dispersion subjected to different centrifugation conditions. Figure 4a shows the distribution of CZTS particles for the CZTS dispersion without a centrifugal treatment. The small particles and large particles agglomerated together to form large clusters such that the boundaries between particles became unclear and it was not possible to tell the size of the particles; hence, the larger and smaller particles and nanoparticles were not separated. Figure 4b shows a TEM image of the CZTS ink centrifuged for 10 min at 1500 rpm. A portion of the large particles was removed, which reduced the agglomeration. The particle boundaries were clear; however, particles larger than several hundred nm remained. To further reduce the size of the particles, the dispersion was centrifuged at a high speed of 6000 rpm for 10, 20, and 30 min. The results are shown in Figure 4c-e, respectively. The sample shown in Figure 4c had the largest particles (in the range of 100 to 200 nm) and almost no agglomeration was observed. In sample (d), particles remaining in the dispersion were smaller than 100 nm, indicating that nanoparticles were obtained. The particle size of sample (e) was in the range of 50 to 100 nm, which indicated that after the treatment to obtain sample (d), the particle size of the dispersion was no longer affected by centrifugation because of the limitations of final particle sizes generated by ball milling processes.
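For intuition about why the 1500 rpm step removes micron-scale particles while the 6000 rpm steps leave only sub-100-nm particles, one can estimate the sedimentation cutoff from Stokes' law in a centrifugal field. The sketch below is only an order-of-magnitude illustration: the rotor radii, solvent properties, and particle density are assumed values, not parameters reported in this work.

```python
import math

def cutoff_diameter_nm(rpm, minutes, r_start_cm=5.0, r_end_cm=10.0,
                       rho_particle=4560.0,   # CZTS density, kg/m^3 (literature value, assumed)
                       rho_fluid=789.0,       # ethanol density, kg/m^3 (assumed)
                       viscosity=1.1e-3):     # ethanol viscosity, Pa*s (assumed)
    """Smallest diameter (nm) expected to sediment from r_start to r_end in the given time."""
    omega = 2.0 * math.pi * rpm / 60.0        # angular velocity, rad/s
    t = minutes * 60.0                        # spin time, s
    d_squared = 18.0 * viscosity * math.log(r_end_cm / r_start_cm) \
        / ((rho_particle - rho_fluid) * omega ** 2 * t)
    return math.sqrt(d_squared) * 1e9

print(f"1500 rpm, 10 min: particles larger than ~{cutoff_diameter_nm(1500, 10):.0f} nm settle out")
print(f"6000 rpm, 20 min: particles larger than ~{cutoff_diameter_nm(6000, 20):.0f} nm settle out")
```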
Deposition of CZTS Precursors
The CZTS nanoparticle inks were used to deposit the CZTS precursors on glass substrates by a spin-coating method. The speed of the substrate was approximately 2000 rpm and 5 µL of CZTS ink was dripped at the center of the substrate for each drop, which was repeated 10 times to obtain a film with a thickness of 1-1.5 µm. Figure 5a-c shows the surface morphology of the CZTS film at different magnifications. The SEM images showed a compact morphology with grains smaller than 100 nm, without cracks, and no large particles were observed. The specific grain size could not be measured because of the small boundaries between grains. Because the precursor was only grown at room temperature, an additional high-temperature treatment was necessary to improve the grain size and crystallinity of the film.
Annealing of the Precursor
To induce grain growth and reduce the residual organic impurities, the CZTS precursor was annealed in an atmosphere with a high sulfur vapor pressure for 20 min at a temperature of 600 °C. Figure 6a,b shows the surface and cross-sectional SEM images of the CZTS films after annealing, respectively. Compared with the precursor morphology shown in Figure 5, the grain size increased markedly. The final grain size ranged from several hundred nm to several µm, and cracks began to appear between the grains, either because of grain growth or because of decomposition of the CZTS particles. According to the cross-sectional image (Figure 6b), the grains extended throughout the film in the thickness direction, which is expected for high-quality films. However, cracks stretching from the surface to the bottom of the film were also observed (marked by the red arrow), indicating the low density of the film. One explanation for this cracking, reported by Scragg et al., is the decomposition of the CZTS film, as shown in the following reactions (1) and (2) [17].
Cu2ZnSnS4(s) ⇌ Cu2S(s) + ZnS(s) + SnS(s) + 1/2 S2(g) (1)
SnS(s) ⇌ SnS(g) (2)
One solution to overcome this issue is to reduce the annealing temperature, to prevent equilibrium (1) from shifting to the right, and to extend the annealing time so that the crystallinity is maintained.
To make a comparison, a CZTS film prepared with the centrifugation condition of 1500 rpm for 10 min was also annealed under the same annealing condition and processed into a complete solar cell structure (please refer to the Supplementary Materials). Table 1 shows the composition of the CZTS precursor and annealed film, as determined by energy-dispersive X-ray spectroscopy (EDX). The precursor had a sulfur composition of less than 50%, whereas the sulfur content increased to 50.5% after annealing, indicating that the film was converted from sulfur-poor to sulfur-rich, which produces p-type CZTS films. It has been widely reported that Zn-rich (Zn/Sn > 1.0) films are required for fabricating high-performance CZTS solar cells [18,19], meaning that the composition of our CZTS films needed to be adjusted. One possible way to adjust the film to Zn-rich is to fabricate a thin layer of ZnS nanoparticles between the CZTS precursor and the Mo back-contact, such that in the following annealing step both Zn and S will be supplemented. Table 1. Composition of precursor and annealed film as measured by energy-dispersive X-ray spectroscopy (EDX). Figure 7 shows the XRD patterns of the precursor and annealed film of CZTS. The crystallinity was also improved by high-temperature annealing. The sulfurization process induced sharpening and strengthening of the peaks. All the peaks of the precursor and the annealed film were assigned to kesterite CZTS. No peaks of secondary phases, such as ZnS and Cu2S, which easily form at high temperatures [20], were detected by XRD. However, XRD alone is incapable of identifying small amounts of secondary phases because of its detection limits. To complement this method, we also performed Raman measurements to confirm the absence of secondary phases. Raman spectra of the precursor and annealed CZTS thin films are shown in Figure 8. The lower spectrum shows the annealed CZTS film with peak fitting by a Lorentzian curve. According to the figure, the precursor showed one peak at 330 cm−1, corresponding to the A mode of kesterite CZTS. The annealed film exhibited a typical Raman spectrum of kesterite CZTS films, with three peaks at 285, 330, and 369 cm−1, corresponding to the two A symmetry modes and a B symmetry mode of the CZTS kesterite structure, respectively [21,22]. This result also indicated that no secondary phases were present after the annealing process.
Table 1 columns: Cu (%), Zn (%), Sn (%), S (%), Zn/Sn.
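When EDX compositions such as those in Table 1 are discussed, the quantities of interest for kesterite films are usually the Cu/(Zn+Sn) and Zn/Sn ratios. A small helper for converting atomic percentages into these ratios is sketched below; the numerical inputs in the example are placeholders, since the individual Table 1 entries are not reproduced in this text (only the 50.5 at.% sulfur value after annealing is quoted above).

```python
def kesterite_ratios(cu, zn, sn, s):
    """Ratios commonly quoted for kesterite films, from EDX atomic percentages."""
    return {
        "Cu/(Zn+Sn)": cu / (zn + sn),
        "Zn/Sn": zn / sn,
        "S/(Cu+Zn+Sn)": s / (cu + zn + sn),
    }

# Placeholder metal percentages for illustration only; 50.5 at.% S is from the text.
print(kesterite_ratios(cu=24.5, zn=12.5, sn=12.5, s=50.5))
```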
The annealed CZTS films were used to fabricate complete solar cell structures. Solar cell performance was evaluated under standard conditions. The conversion efficiency of three cells on the same sample was measured as shown in Table 2. The efficiency ranged from 2.5% to 6.2%, indicating nonuniform solar cell performance due to the poor film quality, as shown in Figure 7. Figure 10 shows the external quantum efficiency (EQE) curve of the CZTS solar cell. Over the visible range of the solar spectrum, the maximum QE was less than 60%, indicating strong recombination. The QE curve decreased sharply in the infrared region at 770 nm, which is the CZTS absorption edge. Thus, the calculated bandgap of the CZTS films was approximately 1.61 eV. The features near 510 nm and 380 nm correspond to the absorption edges of the CdS and ZnO layers [23,24], which are the commonly used CdS buffer and ZnO window layers.
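The bandgap quoted above follows directly from the position of the EQE absorption edge, using the standard photon energy-wavelength conversion (this relation is not written out explicitly in the text):

\[
E_g \;\approx\; \frac{hc}{\lambda_{\mathrm{edge}}} \;=\; \frac{1240\ \mathrm{eV\,nm}}{770\ \mathrm{nm}} \;\approx\; 1.61\ \mathrm{eV}
\]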
On the basis of the EQE data of a solar cell, Jsc can be calculated as [25] J_sc = q ∫_0^∞ QE(E) b_s(E, T_s) dE (3), where q is the elementary charge, QE is the quantum efficiency, and b_s is the solar flux (irradiance); for an air mass of 1.5, the data are available from Ref. [26]. On the basis of Equation (3), Figure 10, and the solar irradiation spectrum, Jsc of the CZTS solar cells was calculated to be 14.2 mA/cm2, which deviates slightly from the value obtained from the J-V curve; the J-V curve represents the real performance of a photovoltaic device. The deviation of Jsc calculated from the QE curve can be explained by the fact that the QE measurement is performed at a single wavelength with a much lower intensity than one-sun irradiation.
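Equation (3) is usually evaluated numerically on a wavelength grid by converting the AM1.5 irradiance to a photon flux and weighting it with the measured EQE. The sketch below shows one way to do this; the file names, column layouts, and the use of NumPy are illustrative assumptions, not the procedure actually used in the paper.

```python
import numpy as np

q = 1.602e-19                 # elementary charge, C
H, C = 6.626e-34, 2.998e8     # Planck constant (J s) and speed of light (m/s)

# Assumed two-column text files: wavelength (nm) vs. AM1.5G spectral irradiance
# (W m^-2 nm^-1), and wavelength (nm) vs. measured EQE (0-1).
wl, irradiance = np.loadtxt("am15g.txt", unpack=True)
wl_eqe, eqe = np.loadtxt("eqe.txt", unpack=True)

eqe_on_grid = np.interp(wl, wl_eqe, eqe, left=0.0, right=0.0)
photon_energy = H * C / (wl * 1e-9)          # J per photon at each wavelength
photon_flux = irradiance / photon_energy     # photons m^-2 s^-1 nm^-1

jsc_A_per_m2 = q * np.trapz(eqe_on_grid * photon_flux, wl)
print(f"Jsc = {jsc_A_per_m2 / 10:.1f} mA/cm^2")   # 1 A/m^2 = 0.1 mA/cm^2
```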
Conclusions
We synthesized a CZTS nanoparticle ink by a wet ball milling method together with centrifugation treatments based on only nontoxic solvents. The ink was then used to deposit CZTS precursor films by a spin-coating method, which led to extremely flat surfaces with high-uniformity. The precursor was annealed at a high temperature of 600 °C under a sulfur atmosphere and the grain size increased to approximately 1 μm from the original size of less than 100 nm. Both the composition and crystallinity of the CZTS film were markedly improved by annealing. The absence of secondary phase formation during the annealing process was confirmed by XRD and Raman analysis. A solar cell efficiency of 6.2% (Voc = 633.3 mV, Jsc = 17.6 mA/cm 2 , and FF = 55.8%) with an area of 0.2 cm 2 was achieved using annealed CZTS film as the light absorbing layer. To improve solar cell performance, it is necessary to increase grain size, improve crystallinity, and reduce defects in the film. Because the fabrication process of CZTS features a complex growth mechanism, the formation of secondary phases should be checked to confirm film quality, which directly affects solar cell performance.
Author Contributions: Conceptualization, X.Z., part of the characterization: E.F.; funding acquisition, Y.W.; draft review and editing, C.Z.
Supplementary Materials: The following are available online at http://www.mdpi.com/2079-4991/9/3/336/s1, Figure S1: (a) Surface and (b) cross-section of an annealed CZTS film using centrifugation condition: 6000 rpm for 20 min. (c) Surface Morphology of an annealed CZTS film fabricated with CZTS ink using centrifugation condition: 1500 rpm for 10 min. The annealing was conducted at a temperature of 600 • C in S rich atmosphere, Figure S2: J-V curve of CZTS solar cells for (a) centrifugation 6000 rpm for 20 min; (b) 1500 rpm for 10 min, Table S1: Solar cell performance of CZTS solar cells. | 2019-03-11T17:24:13.014Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "0383043837dff6f03edac95445b687ec52dfa403",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/9/3/336/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0383043837dff6f03edac95445b687ec52dfa403",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
108624441 | pes2o/s2orc | v3-fos-license | Miniatured Phase Shifter
RF MEMS switches have wide application in areas such as telecommunication and satellite communication. RF MEMS switches [1] offer a number of advantages over PIN-diode and FET switches, notably low insertion loss and high isolation, which keeps them in constant demand. This paper takes an RF MEMS switch [1] with a new cantilever beam [2] and analyses the mechanical characteristics [3] of the switch. An application of the RF MEMS series switch [4] in phase shifter design is then presented. Phase shifters have been used for many years and are a standard component of phased-array antenna systems; combining the two technologies is the purpose of this project. Phase shifts of 11.25°, 22.5°, 45°, 90°, and 180° are required, which calls for a 5-bit phase shifter. The five-bit RF MEMS phase shifter consists of five phase-shift sections [5] of 11.25°, 22.5°, 45°, 90°, and 180° arranged in a similar layout. The phase shifters consist of stub-like structures etched in the ground plane of the CPW together with MEMS capacitive series switches. Microwave and millimeter-wave phase shifters [6] are used in phased-array antennas for telecommunication and radar applications. Solid-state phase shifters based on p-i-n diodes or Field-Effect Transistor (FET) switches are widely used because they provide a good planar solution over a variety of microwave frequency ranges. Most Micro-Electro-Mechanical Systems (MEMS) phase shifters developed today are based on established designs in which the solid-state switch is replaced by a MEMS switch. MEMS-switch-based phase shifters [7] offer advantages such as low loss and wideband performance. Single-pole double-throw MEMS switches with good performance are commonly used in MEMS phase shifter applications.
Introduction
RF MEMS switches have wide application in areas such as telecommunication and satellite communication. RF MEMS switches [1] offer a number of advantages over PIN-diode and FET switches, notably low insertion loss and high isolation, which keeps them in constant demand. This paper takes an RF MEMS switch [1] with a new cantilever beam [2] and analyses the mechanical characteristics [3] of the switch; the RF MEMS series switch [4] is then applied in a phase shifter design. Phase shifters have been used for many years and are a standard component of phased-array antenna systems, and combining the two technologies is the purpose of this project. Phase shifts of 11.25°, 22.5°, 45°, 90°, and 180° are required, which calls for a 5-bit phase shifter. The five-bit RF MEMS phase shifter consists of five phase-shift sections [5] of 11.25°, 22.5°, 45°, 90°, and 180° arranged in a similar layout. The phase shifters consist of stub-like structures etched in the ground plane of the CPW together with MEMS capacitive series switches. Microwave and millimeter-wave phase shifters [6] are used in phased-array antennas for telecommunication and radar applications. Solid-state phase shifters based on p-i-n diodes or Field-Effect Transistor (FET) switches are widely used because they provide a good planar solution over a variety of microwave frequency ranges. Most Micro-Electro-Mechanical Systems (MEMS) phase shifters developed today are based on established designs in which the solid-state switch is replaced by a MEMS switch. MEMS-switch-based phase shifters [7] offer advantages such as low loss and wideband performance. Single-pole double-throw MEMS switches with good performance are commonly used in MEMS phase shifter applications, especially because most MEMS phase shifters are switched-line type phase shifters [8]. In this research, the novel compact cantilever switches are used as building blocks in the design of a MEMS phase shifter [10]. Figure 1 below shows the switched-line phase shifter [12] using the RF MEMS series switch [1]. When the transmission lines support a TEM (or quasi-TEM, such as CPW) mode, the phase shift becomes a linear function of frequency, which gives a true time delay between the input and output ports. The need for low-loss phase shifters is increasing, and RF MEMS technology is a natural solution to provide them. An approach to building a low-loss phase shifter using RF MEMS series switches is presented here. Using a switched-line [13] 5-bit resonating phase shifter, a good average insertion loss was attained together with good return loss. A novel cantilever beam with low actuation voltage and fast switching time is designed for use in the switched-line phase shifter. To our knowledge, these devices represent the lowest-loss phase shifters reported to date. There are other phase shifter topologies besides the switched-line phase shifter [16], such as the DMTL, which mainly uses shunt switches; however, the large number of shunt switches increases the complexity of the design, and it is very difficult to obtain a multi-bit phase shift with a DMTL. For these reasons the switched-line phase shifter is used here, which also makes it easy to design a miniature phase shifter.
Design Description
The switch is designed on a CPW platform with dimensions G/S/G = 60/110/60 µm for operation from DC to 20 GHz, as shown in Figure 2. Gold is used for the cantilever beam and the substrate is silicon; the dimple is also shown in the figure. A conventional CPW [1] on a dielectric substrate consists of a centre strip conductor with semi-infinite ground planes on either side of the structure, and it supports a quasi-TEM mode of propagation. The CPW offers several advantages over a conventional microstrip line: first, it simplifies fabrication, and second, it allows easy shunt as well as series surface mounting of active and passive devices. Our switch is a CPW series switch consisting of a cantilever beam [4] with holes in it. The CPW line width should be greater than the signal width, as displayed in Figure 2. The substrate thickness plays a less important role because the fields are concentrated in the slots. The structure generates elliptically polarised magnetic fields, and all conductors lie in the same plane.
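As a rough sanity check on the chosen G/S/G = 60/110/60 µm geometry, the quasi-static conformal-mapping formula for a CPW on an infinitely thick substrate can be evaluated numerically. The sketch below is an assumption-laden illustration: the silicon permittivity value, the neglect of finite substrate and metal thickness, and the function name are not taken from the paper.

```python
import math
from scipy.special import ellipk  # complete elliptic integral of the first kind, K(m)

def cpw_impedance(center_width_um, gap_um, eps_r):
    """Quasi-static Z0 of a CPW on an infinitely thick substrate.

    Z0 = (30*pi / sqrt(eps_eff)) * K(k') / K(k),  k = S / (S + 2G),
    with eps_eff ~ (eps_r + 1) / 2.  scipy's ellipk takes the parameter m = k**2.
    """
    s, g = center_width_um, gap_um
    k = s / (s + 2.0 * g)
    kp = math.sqrt(1.0 - k * k)
    eps_eff = (eps_r + 1.0) / 2.0
    z0 = 30.0 * math.pi / math.sqrt(eps_eff) * ellipk(kp**2) / ellipk(k**2)
    return z0, eps_eff

z0, eps_eff = cpw_impedance(center_width_um=110, gap_um=60, eps_r=11.9)  # silicon
print(f"eps_eff ~ {eps_eff:.2f}, Z0 ~ {z0:.1f} ohm")
```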
DGS Structure
Defected Ground Structures (DGS) [12] have shown much potential for use in different applications. They offer a sharp electromagnetic band-gap and a slow-wave factor, which helps to realise smaller circuits. In these structures, well-defined shapes are etched into the ground plane. Coplanar waveguides (CPW) have both the signal line and the ground on the same surface; although they occupy a larger area than microstrip lines, they can be considered a good platform for DGS. The proposed structure has the advantage of an almost constant capacitance while the inductance varies linearly as the number of cells increases, which simplifies the design process. A DGS is a periodic or non-periodic pattern etched from the ground plane of a planar transmission line such as a microstrip, coplanar, or conductor-backed coplanar waveguide. The DGS [14] mainly disturbs the shielded current distribution in the ground plane, and this disturbance increases the effective capacitance and inductance of the line. Periodic DGS structures applied to planar transmission lines have attracted wide interest because of their broad application in microwave circuits. Transmission lines with periodic structures exhibit a pass band and a rejection band, behaving as low-pass filters, and a key property of periodic structures is the slow-wave effect. The design of a periodic DGS considers the shape of the unit cell and the distance between two unit cells.
Phase Shifter Description
In phased-array antennas for telecommunication and radar applications, microwave and millimeter-wave phase shifters [4] are essential components. Phase shifters based on p-i-n diodes and Field-Effect Transistor (FET) switches are used because they provide a good planar solution over a variety of microwave frequency ranges. Most Microelectromechanical Systems (MEMS) phase shifters developed today are based on established designs in which the solid-state switch is replaced by a MEMS switch. MEMS-switch-based phase shifters [8] offer advantages such as low loss and wideband performance. Single-pole double-throw MEMS switches with good performance are commonly used in MEMS phase shifter applications, especially because most of them are switched-line type phase shifters. Figure 3 above shows how the cantilever is designed and used as a phase shifter; four cantilever beams [4] are used for each phase-shift bit. The single-pole double-throw switching realises a differential phase shift [10] with respect to a reference transmission line, which is related to the difference in length between the reference and delay paths:

Δφ = (2π/λ)(l2 − l1)

where Δφ is the phase difference, λ is the guided wavelength, and l1 and l2 are the line lengths of the reference path and the delay path, respectively.
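A minimal sketch of this relation is given below: for an assumed guided wavelength it reports the path-length difference l2 − l1 needed for each bit of the 5-bit shifter. The guided-wavelength value is illustrative only and is not a dimension quoted in the paper.

```python
import math

def delta_length_for_phase(delta_phi_deg, guided_wavelength_um):
    """Length difference l2 - l1 producing a phase shift of delta_phi_deg in a
    switched-line phase shifter: delta_phi = (2*pi/lambda_g) * (l2 - l1)."""
    return math.radians(delta_phi_deg) * guided_wavelength_um / (2.0 * math.pi)

LAMBDA_G_UM = 11_800.0  # assumed guided wavelength at 10 GHz on this CPW (illustrative)
for bit in (11.25, 22.5, 45.0, 90.0, 180.0):
    dl = delta_length_for_phase(bit, LAMBDA_G_UM)
    print(f"{bit:6.2f} deg  ->  delta_l = {dl:8.1f} um")
```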
Pull-in Voltage
The actuation voltage can also be calculated from the following relations:

F = ma,    F = kΔx  ⇒  k = ma/Δx

where F is the force applied to the beam, m is the mass of the cantilever beam, k is the spring constant, Δx is the deflection of the beam, a is the acceleration (taken as 9.8 m/s²), l is the length of the beam, w is the width of the beam, and t is the thickness of the beam. The mass is obtained from the beam volume (the product of length, width, and thickness) together with the density of the beam material. The spring constant calculated in this way is then substituted into the pull-in voltage expression to obtain the actuation voltage of the series switch, where Vp denotes the pull-in voltage of the switch.
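The calculation flow described above can be sketched in a few lines. The expression used for Vp below is the standard parallel-plate pull-in formula Vp = sqrt(8·k·g0³ / (27·ε0·A)), assumed here because the paper's own equation did not survive extraction; the beam dimensions, gold density, deflection, gap, and electrode area are likewise illustrative assumptions rather than the paper's values.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity (F/m)
RHO_GOLD = 19_300.0   # density of gold (kg/m^3) -- needed for the mass, assumed here

def pull_in_voltage(k, gap_m, electrode_area_m2):
    """Standard parallel-plate pull-in expression Vp = sqrt(8*k*g0^3 / (27*eps0*A))."""
    return math.sqrt(8.0 * k * gap_m**3 / (27.0 * EPS0 * electrode_area_m2))

# Illustrative beam dimensions (assumptions, not the paper's exact values)
L, W, T = 300e-6, 100e-6, 1e-6          # length, width, thickness (m)
GAP = 2.5e-6                             # beam-to-electrode gap (m)
DEFLECTION = 0.5e-6                      # assumed static deflection delta_x (m)

mass = RHO_GOLD * L * W * T              # mass = density * volume
force = mass * 9.8                       # F = m*a, as in the text
k = force / DEFLECTION                   # F = k*delta_x  ->  k = F/delta_x
vp = pull_in_voltage(k, GAP, electrode_area_m2=W * W)   # assumed square electrode
print(f"k ~ {k:.3e} N/m, Vp ~ {vp:.2f} V")
```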
Stress Analysis
The stress condition arises from the different conditions encountered by the lower and upper layers of a uniform beam. In most designs it is important to reduce the stress factor [5], because the stress gradient produces positive or negative beam curvature.
In the stress expression, I is the moment of inertia of the rectangular beam cross-section (I = wt³/12). The stress difference between the upper and lower layers must be kept below 5 MPa.
Switching Time
The switching time depends strongly on the applied voltage, since a larger voltage produces a stronger electrostatic force and hence a faster pull-down.
The applied voltage should lie in the range 1.3-1.4 Vp to obtain a fast switching time at a sensible voltage level.
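A commonly used estimate for electrostatic MEMS switches is ts ≈ 3.67·Vp/(Vs·ω0) with ω0 = sqrt(k/m); this standard approximation is assumed here since the paper's own expression was lost in extraction, and the numerical inputs in the sketch below are purely illustrative rather than the paper's values.

```python
import math

def switching_time(v_pull_in, v_applied, k, mass):
    """Approximate switching time t_s ~ 3.67 * Vp / (Vs * omega0), omega0 = sqrt(k/m)."""
    omega0 = math.sqrt(k / mass)
    return 3.67 * v_pull_in / (v_applied * omega0)

# Illustrative assumptions (not the paper's values)
K, MASS, VP = 2.0, 5.8e-10, 10.0          # N/m, kg, V
for ratio in (1.3, 1.4):                  # recommended Vs = 1.3-1.4 * Vp
    ts = switching_time(VP, ratio * VP, K, MASS)
    print(f"Vs = {ratio:.1f}*Vp  ->  t_s ~ {ts * 1e6:.1f} us")
```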
Cantilever HFSS Result
Figure 4 below shows the design of the cantilever-beam series switch. The RF characteristics of the switch were simulated in HFSS. The series switch should achieve a return loss of at least 10 dB, and the insertion loss should lie between 0.1 and 0.3 dB; from Figure 5, at 20 GHz the switch shows an insertion loss of 0.1 dB and a return loss of 19 dB. Figure 8 shows the stress factor [6] of the switch. The stress is low because of the holes in the cantilever beam design. In this design the stress factor [7] shown in Figure 9 is 0.96 MPa; after the holes are introduced the beam area decreases, so the stress of the switch increases to 1.96 MPa. Figure 10 below shows the stress factor when the holes are included.
Phase Shifter Result
The switched-line topology [10] is mainly used for miniaturization; it can provide five different phase shifts in a single design. Figure 1 above shows the switched-line phase shifter design.
The phase shifts of 11.25°, 22.5°, 45°, 90°, and 180° are discussed in turn below.
Phase Shift 11.25
The formula discussed in the previous section is used to design the stub-like structures for the reference path [11] and the delay path. Figure 10 below shows the first bit, with a reference-path length of 124.6 µm and a delay-path length of 124.5 µm, chosen to give a phase shift of 11.25°; a phase of 35.1° is obtained at 10 GHz. The RF characteristics at 10 GHz show a return loss of 11 dB and an insertion loss of 1.1 dB. The phase error is found by subtracting the phase obtained on the plain CPW line from the phase obtained with this bit; from Table 1, the phase error is 4.59°.
Phase Shift 22.5
Figure 11 below shows a return loss of 13 dB and an insertion loss of 0.8 dB. Figure 12 below shows a phase of 27.66°; the total phase shift is obtained by subtracting the phase of the plain CPW line, and the resulting phase shift is shown in Table 2. The phase error is obtained by subtracting the target phase shift from the measured phase shift. To obtain the 22.5° phase, the reference line and the delay line of the second bit are switched ON and the remaining switches are kept OFF. For the 45° phase shift, the corresponding reference path and delay path are switched on; Figure 13 above shows how the 45° shift works, Figures 14 and 15 below show the RF characteristics and the phase shift, and Table 3 below summarizes the results.
90 Degree Phase Shift
For the 90° phase, the fourth bit is switched on and all the other switches are turned off, which gives the required phase shift. Figure 16 shows the working principle of the 90° phase shift as well as the RF characteristics, the phase of the design, and the phase error. Figure 17 shows a return loss of 20 dB and an insertion loss of 1.4 dB at 10 GHz. Figure 18 below shows a phase of 39.96° at 10 GHz; this phase is added to that of the CPW line, and the total phase shift and the phase error are calculated as shown in Table 4.
180 Degree Phase Shift
For the 180° phase shift, Figure 19 shows the RF characteristics, and the phase shift and phase error are shown in Table 5. At 10 GHz, Figure 19 shows a return loss of 16 dB and an insertion loss of 0.5 dB. Table 5 also gives a phase error of 5.59°, obtained by combining the phase obtained from the CPW line with the measured phase when the cantilever switch for the 180° bit is ON.
Conclusion
In this paper, RF MEMS switches combine the benefits of traditionally used electromechanical switches, namely low insertion loss and high isolation, with a major advantage of solid-state switches, namely low power consumption. Single-Pole Single-Throw (SPST) DC-contact switches have been designed at a frequency of 20 GHz with a return loss of 19 dB. Previous work on compact MEMS series switches has mainly focused on cantilever-based metal-to-metal contact designs; in order to make the cantilever beam insensitive to the stress factor, relatively thick beams are used, but such a thick beam is very stiff and difficult to pull down. With a new design, this paper focuses on the development of metal-to-metal DC-contact switches using thin cantilevers to achieve high return loss and very low actuation voltage simultaneously. The design shows good results up to 40 GHz and can ultimately contribute to other applications at higher frequencies. The tip deflection is well controlled at a gap of 2.5 µm, and a single-pole single-throw switch is designed up to 20 GHz with an insertion loss of 0.1 dB and isolation better than 19 dB. These switches are then used to build switched-line phase shifters up to 10 GHz, and their RF characteristics up to 10 GHz are presented. The 5-bit phase shifter (11.25°, 22.5°, 45°, 90°, 180°) demonstrates the capability of achieving the desired phase change at 20 GHz. | 2019-04-12T13:55:25.929Z | 2015-08-09T00:00:00.000 | {
"year": 2015,
"sha1": "27cef81ca696b0dfa7d47e9b140b9410f09bb4bb",
"oa_license": null,
"oa_url": "https://doi.org/10.17485/ijst/2015/v8i19/76867",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e1f086ce5a16443008281e6eb7357384554a4906",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
233664156 | pes2o/s2orc | v3-fos-license | Esthetic Dentistry: A case report on metal core concealment approach for endodontically treated maxillary anterior tooth
Aim: To meet the esthetic restoration demands of a badly mutilated maxillary anterior tooth. Background: The longevity of endodontically treated teeth has been greatly improved by continuing advances in endodontic treatment and restorative techniques. It has been reported that a large number of endodontically treated teeth are restored to their original function with the use of intra-radicular devices and restorative materials. Over the last few decades, many prefabricated post and core systems have been developed. To mask the greyish appearance of the oxidized subsurface of the metal, which is highly reflective, an opaque layer of porcelain is required once the casting is done. The biggest advantage is gaining the strengths of both metal and ceramic, in strength and esthetics together. Case Description: This article reports a case in which endodontic retreatment followed by restoration with a ceramic-coated cast post was carried out, mainly because of the esthetic priority of the patient. Conclusion: Ceramic coping is an adjunct treatment modality to hide the blackish hue of a cast metal post and core when esthetics is a concern. © This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Introduction
A grossly decayed tooth with minimal coronal structure remaining, whether as the result of a previous large restoration or of an extensive access opening made while locating the orifices, often presents a number of challenges when choosing a suitable post-endodontic restorative material [1]. Clinicians regularly face the decision of whether a post is needed at all, and there is further uncertainty over whether to use a custom cast post or a prefabricated post. According to Franklin Weine, the majority of root canal treated teeth fail after endodontic treatment because of inadequate post-endodontic restoration rather than for primarily endodontic reasons [2]. When selecting a suitable post-endodontic restorative material, several factors must be kept in mind, including the extent of the remaining coronal structure, the functional requirements of the tooth, esthetic requirements, and the age and periodontal status of the patient [3]. In the last century, the custom-made cast metal post represented the method of choice in the philosophy of reconstructing the endodontically treated tooth. The technique normally involved a porcelain-fused-to-metal crown, that is, cast posts used in conjunction with a porcelain-fused-to-metal (PFM) restoration. In general, an acceptable esthetic outcome is obtained by veneering the core of the cast metal post and core system with ceramic.
To mask the greyish appearance of the oxidized subsurface of the metal, which is highly reflective, an opaque layer of porcelain is required once the casting is done. In natural teeth, however, light can pass through every part of the tooth and its surrounding tissues, so this kind of reflection is not seen in normal teeth. The biggest advantage of the combined approach is that it gains the strengths of both metal and ceramic, in strength and esthetics together [4]. This paper reports a case in which endodontic retreatment followed by restoration with a ceramic-coated cast post was carried out, mainly because of the esthetic priority of the patient.
Case Description
A 25-year-old female patient reported to the Department of Conservative Dentistry and Endodontics with the chief complaint of a decayed and discoloured tooth in her upper front tooth region for 2 years; she wanted her esthetics corrected. Her past dental history revealed that she had undergone root canal treatment of the concerned tooth 3 years earlier. Her medical, social, and personal history was non-contributory. The findings of the extra-oral examination were all within normal limits, and periodontal assessment revealed good oral hygiene and gingival health.
Intraoral examination showed a grossly decayed and discoloured maxillary left canine #23 (Figure 1). Radiographic examination confirmed previous root canal treatment and deficient obturation of the concerned tooth #23 (Figure 2). The patient's maxillary anterior teeth were labially proclined and a deep-bite malocclusion was present. The patient was advised to undergo orthodontic correction but decided to postpone it because of financial constraints. Root canal retreatment followed by post-endodontic restoration with a porcelain-masked cast metal post and core and an all-ceramic crown prosthesis was planned. The treatment plan was discussed with the patient and informed consent was obtained. Carious tooth structure was removed, the previous gutta-percha cones were retrieved from the canal, and the working length was confirmed using a #20 K-file. Biomechanical preparation of the canal was completed using K-files and H-files following the step-back technique. 3% sodium hypochlorite, 17% ethylenediaminetetraacetic acid (EDTA), and normal saline were used to irrigate the canal after each file. Sectional obturation was carried out in the apical 6 mm of the root length using the lateral condensation technique, and after obturation the canal was sealed with a temporary restorative material (3M ESPE Cavit-G). At the next appointment, the post space was enlarged using Peeso reamers up to size 4 to a depth of 15.5 mm, controlled with rubber stoppers adapted to the reamers (Figure 3). A cervical ferrule was prepared with a height of 2 mm, a width of 1 mm, and a 2-4 degree taper. Using the direct technique, the post and core wax pattern was fabricated (Figure 4), and a rubber-base impression material (Zhermack Zetaplus) was used to make the impression for the laboratory work. The metal casting was made from a cast metal alloy (79.3% copper, 7.8% aluminium, 4.3% nickel) (Figure 5), and an opaque porcelain layer was applied to the core portion (Figure 6). The custom-made, ceramic-veneered post and core was luted with a suitable luting cement (Figure 7). Planning of the definitive prosthesis was postponed until the patient's orthodontic treatment was under way.
Discussion
Esthetic restoration of the upper anterior teeth is always considered a significant challenge. The complexity of every esthetic case is further increased because several dental disciplines are involved in the management of a compromised smile [5]. The longevity of endodontically treated teeth has been greatly improved by continuing advances in endodontic treatment and restorative techniques. It has been reported that a large number of endodontically treated teeth are restored to their original function with the use of intra-radicular devices and restorative materials. These materials range from the traditional custom cast post and core to single-visit techniques using prefabricated post and core systems. Over the last few decades, many prefabricated post and core systems have been developed. The choice of post design is important, because it may affect the longevity of the tooth [6]. A large part of the literature reviewed emphasizes the stress distribution during post insertion and during masticatory function. Various factors, including the length, diameter, material, flexibility, design, and biocompatibility of the post, the amount of residual dentin, the luting cement, the treatment plan, and the forces acting on the restored tooth, are also found to influence the fracture resistance of a reconstructed tooth. Of all the factors enumerated, the core design, the occlusal loads, and the role of the treated tooth in various functions are found to have a direct effect on the longevity of the restored tooth [7]. Traditionally, titanium, carbon, polyethylene-fiber, and stainless-steel posts have been used in the anterior region [8-10]. However, when all-ceramic restorations are preferred, metal posts may adversely affect the esthetic outcome [11]. To overcome the drawbacks of metallic posts, a wide range of esthetic posts has become commercially available, for example fiber-reinforced composite resin posts (FRC) and yttrium-stabilized zirconia-based ceramic posts [12]. Previous investigations have found that the underlying tooth structure influences the appearance of a ceramic restoration [13,14]; accordingly, a clinician should take this into account when treating such cases. The shade of the substrate affects the final appearance of the ceramic restoration. Clinical situations with stained teeth or dark-coloured abutments can be masked with a ceramic layer that improves the outcome of the final restoration placed over it and gives an excellent esthetic result [15]. In the present case, because the patient was a student with financial constraints, a cast metal post and core system was planned. Giving priority to the patient's esthetic concerns, the displeasing greyish hue of the metallic core was masked with an opaque porcelain coating.
The treatment of the upper anterior teeth can be complicated by numerous factors, including, but not limited to, discrepancies in tooth shape and size, old defective restorations, malalignment, and an unfavourable gingival contour [5]. Therefore, a comprehensive and detailed treatment plan is critical to identifying both the esthetic and the functional requirements of the treatment. The use of a wax-up, comprehensive facial and dental esthetic analyses, and good communication with the laboratory technician are needed to achieve the best possible esthetic result [16].
Conclusion
In the present era, the demand for all-ceramic restorations is rising as a result of growing esthetic awareness among the population. An underlying metallic core presents a major obstacle to obtaining the ultimate esthetic outcome of an all-ceramic restoration. Accordingly, treating such cases requires a modification to hide the metallic hue of the custom-made cast post and core framework. The present case report describes such an approach, using a ceramic coping to obscure the unpleasant metallic hue and enhance the final esthetic outcome of the treatment.
Acknowledgement
The author acknowledges the entire Department of Conservative Dentistry and Endodontics, Saraswati Dental College and Hospital, Lucknow.
Source of Funding
No financial support was received for the work within this manuscript.
Conflict of Interests
The author declares that they do not have any conflict of interests. | 2021-04-16T22:30:00.195Z | 2021-03-28T00:00:00.000 | {
"year": 2021,
"sha1": "8f4a600b426c4eead20c25271aee2f0df3657e68",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ijce.in/journal-article-file/13443",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8f4a600b426c4eead20c25271aee2f0df3657e68",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
233210097 | pes2o/s2orc | v3-fos-license | How Asymmetry Helps Buffer Management: Achieving Optimal Tail Size in Cup Games
The cup game on $n$ cups is a multi-step game with two players, a filler and an emptier. At each step, the filler distributes $1$ unit of water among the cups, and then the emptier selects a single cup to remove (up to) $1$ unit of water from. There are several objective functions that the emptier might wish to minimize. One of the strongest guarantees would be to minimize tail size, which is defined to be the number of cups with fill $2$ or greater. A simple lower-bound construction shows that the optimal tail size for deterministic emptying algorithms is $\Theta(n)$, however. We present a simple randomized emptying algorithm that achieves tail size $\tilde{O}(\log n)$ with high probability in $n$ for $\operatorname{poly} n$ steps. Moreover, we show that this is tight up to doubly logarithmic factors. We also extend our results to the multi-processor cup game, achieving tail size $\tilde{O}(\log n + p)$ on $p$ processors with high probability in $n$. We show that the dependence on $p$ is near optimal for any emptying algorithm that achieves polynomial-bounded backlog. A natural question is whether our results can be extended to give unending guarantees, which apply to arbitrarily long games. We give a lower bound construction showing that no monotone memoryless emptying algorithm can achieve an unending guarantee on either tail size or the related objective function of backlog. On the other hand, we show that even a very small (i.e., $1 / \operatorname{poly} n$) amount of resource augmentation is sufficient to overcome this barrier.
Introduction
At the start of the cup game on n cups, there are n empty cups. In each step of the game, a filler distributes 1 unit of water among the cups, and then an emptier removes (up to) 1 unit of water from a single cup of its choice. The emptier aims to minimize some measure of "behind-ness" for cups in the system (e.g., the height of the fullest cup, or the number of cups above a certain height). If the emptier's algorithm is randomized, then the filler is an oblivious adversary , meaning it cannot adapt to the behavior of the emptier.
Bounds on backlog. Much of the work on cup games has focused on bounding the backlog of the system, which is defined to be the amount of water in the fullest cup.
Research on bounding backlog has spanned five decades [1, 6-8, 11, 18, 19, 23, 26, 28, 30-34, 36]. Much of the early work focused on the fixed-rate version of the game, in which the filler places a fixed amount of water f j into each cup j on every step [6-8, 23, 26, 30, 32-34, 36]; in this case constant backlog is achievable [7,33]. For the full version of the game, without fixed rates, constant backlog is not possible. In this case, the optimal deterministic emptying algorithm is known to be the greedy emptying algorithm, which always empties from the fullest cup, and which achieves backlog O(log n) [1,18]. If the emptier is permitted to use a randomized algorithm, then it can do much better, achieving an asymptotically optimal backlog of O(log log n) for poly n steps with high probability [11,19,28].
A strong guarantee: small tail size. The tail size of a cup game at each step is the number of cups containing at least some constant c amount of water. For the guarantees in this paper, c will be taken to be 2.
A guarantee of small tail size is particularly appealing for scheduling applications, where cups represent tasks and water represents work that arrives to the tasks over time. Whereas a bound of b on backlog guarantees that the furthest behind worker is only behind by at most b, it says nothing about the number of workers that are behind by b. In contrast, a small bound on tail size ensures that almost no workers are behind by more than O (1).
The main result in this paper is a randomized emptying algorithm that achieves tail size O(log n log log n). The algorithm also simultaneously optimizes backlog, keeping the maximum height at O(log log n). As a result, the total amount of water above height 2 in the system is O(log n) with high probability. In contrast, the best possible deterministic emptying algorithm allows for up to n^{1−ε} cups to all have fills Ω(log n) at once (see the lower-bound constructions discussed in [19] and [12]).
The problems of optimizing tail size and backlog are closely related to the problem of optimizing the c-shifted ℓ_p norm of the cup game. Formally, the c-shifted ℓ_p norm is given by $\left(\sum_{i=1}^{n} \max(f_i - c, 0)^p\right)^{1/p}$, where f_i is the fill of cup i. 1 The problem of bounding backlog corresponds to the problem of optimizing the ℓ_∞ norm of the cup game, and the problem of bounding tail size corresponds to optimizing the c-shifted ℓ_0 norm (the number of cups with fill at least c), with c = 2.

Intuitively, the asymmetric treatment of cups ensures that there is a large collection (of size roughly n/2) of randomly selected cups that are "almost always" empty. The fact that the emptier doesn't know which cups these are then implies the unpredictability guarantee. Proving this intuition remains highly nontrivial, however, and requires several new combinatorial ideas.
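To make the three objective functions concrete, the short sketch below (not from the paper) evaluates backlog, tail size, and the c-shifted ℓ_p norm for a vector of fills, using the natural reading of the definition above.

```python
def backlog(fills):
    """Backlog: the fill of the fullest cup (the l_infinity objective)."""
    return max(fills)

def tail_size(fills, c=2.0):
    """Tail size: number of cups with fill at least c (c = 2 throughout the paper)."""
    return sum(1 for f in fills if f >= c)

def shifted_lp_norm(fills, c=2.0, p=1.0):
    """c-shifted l_p norm: ( sum_i max(0, f_i - c)^p )^(1/p)."""
    return sum(max(0.0, f - c) ** p for f in fills) ** (1.0 / p)

fills = [0.3, 1.7, 2.4, 5.0, 0.0, 2.0]
print(backlog(fills))                 # 5.0
print(tail_size(fills))               # 3  (the cups with fills 2.4, 5.0, and 2.0)
print(shifted_lp_norm(fills, p=1))    # 0.4 + 3.0 + 0.0 = 3.4
```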
Multi-processor guarantees. The cup game captures a scheduling problem in which a single processor must pick one of n tasks to make progress on in each time step. The multi-processor version of this scheduling problem is captured by the p-processor cup game [1,7,11,28,31,33]. In each step of the p-processor cup game, the filler distributes p units of water among cups, and the emptier removes 1 unit of water from (up to) p different cups. Because the emptier can remove at most 1 unit of water from each cup at each step, an analogous constraint is also placed on the filler, requiring that it places at most 1 unit of water into each cup at each step.
A key feature of the p-processor cup game is that the emptier is required to remove water from p distinct cups in each step, even if the vast majority of water is contained in fewer than p cups.
Until recently, establishing any nontrivial bounds on backlog in the multi-processor cup game remained an open problem, even with the help of resource augmentation. Recent work by Bender et al. [11] (using resource augmentation) and then by Kuszmaul [28] (without resource augmentation) established bounds on backlog closely matching those for the single-processor game.
By extending our techniques to the multi-processor setting, we construct a randomized emptying algorithm that achieves tail size Õ(log n + p) with high probability in n after each of the first poly n steps of a p-processor cup game. Moreover, we show that the dependence on p is near optimal for any backlog-bounded algorithm (i.e., any algorithm that achieves backlog poly n or smaller).
Lower bounds against unending guarantees. In the presence of resource augmentation ε = 1/ polylog n, the smoothed greedy emptying algorithm is known to provide an unending guarantee [11], meaning that the high-probability bounds on backlog and tail size continue to hold even for arbitrarily large steps t.
A natural question is whether unending guarantees can also be achieved without the use of resource augmentation. It was previously shown that, when p ≥ 2, the smoothed greedy algorithm does not offer unending guarantees [28]. Analyzing the single-processor game has remained an open question, however.
We give a lower bound construction showing that neither the smoothed greedy algorithm nor the asymmetric smoothed greedy algorithm offer unending guarantees for the single-processor cup game without the use of resource augmentation. Even though resource augmentation ε > 0 is needed for the algorithms to achieve unending guarantees, we show that the amount of resource augmentation required is very small. Namely, ε = 1/2 polylog n is both sufficient and necessary for the asymmetric smoothed greedy algorithm to offer unending guarantees on both tail size and backlog.
We generalize our lower-bound construction to work against any emptying algorithm that is both monotone and memoryless, including emptying algorithms that are equipped with a clock. We show that no such emptying algorithm can offer an unending guarantee of o(log n) backlog in the single-processor cup game, and that any unending guarantee of polylog n tail size must come of the cost of polynomial backlog.
We call the filling strategy in our lower bound construction the fuzzing algorithm . The fuzzing algorithm takes a very simple approach: it randomly places water into a pool of cups, and shrinks that pool of cups very slowly over time. The fact that gradually shrinking random noise represents a worst-case workload for cup games suggests that real-world applications of cup games (e.g., processor scheduling, network-switch buffer management, etc.) may be at risk of experiencing "aging" over time, with the performance of the system degrading due to the impossibility of strong unending guarantees.
Related work on other variants of cup games. Extensive work has also been performed on other variants of the cup game. Bar-Noy et al. [5] studied the backlog for a variant of the single-processor cup game in which the filler can place arbitrarily large integer amounts of water into cups at each step. Rather than directly bounding the backlog, which would be impossible, they show that the greedy emptying algorithm achieves competitive ratio O(log n), and that this is optimal for both deterministic and randomized online emptying algorithms. Subsequent work has also considered weaker adversaries [17,21].
Several papers have also explored variants of cup games in which cups are connected by edges in a graph, and in which the emptier is constrained by the structure of the graph [12][13][14][15]. This setting models multi-processor scheduling with conflicts between tasks [14,15] and some problems in sensor radio networks [12].
Another recent line of work is that by Kuszmaul and Westover [29], which considers a variant of the p-processor cup game in which the filler is permitted to change the value of p over time.
Remarkably, the optimal backlog in this game is significantly worse than in the standard game, and is Θ(n) for an (adaptive) filler.
Cup games have also been used to model memory-access heuristics in databases [9]. Here, the emptier is allowed to completely empty a cup at each step, but the water from that cup is then "recycled" among the cups according to some probability distribution. The emptier's goal is achieve a large recycling rate, which is the average amount of water recycled in each step.
Closely related to the study of cup games is the problem of load balancing, in which one must assign balls to bins in order to minimize the number of balls in the fullest bin. In the classic load balancing problem, n balls arrive over time, and each ball comes with a selection of d random bins (out of n bins) in which it can potentially be placed. The load balancing algorithm gets to select which of the d bins to place the ball in, and can, for example, always select the bin with the fewest balls. But what should the algorithm do when choosing between bins that have the same number of balls? In this case, Vöcking famously showed that the algorithm should always break ties in the same direction [40], and that this actually results in an asymptotically better bound on load than if one breaks ties arbitrarily. Interestingly, one can think of the asymmetry used in Vöcking's algorithm for load balancing as being analogous to the asymmetry used in our algorithm for the cup game: in both cases, the algorithm always breaks ties in the same random direction, although in our result, the way that one should define a "tie" is slightly nonobvious. In the case of Vöcking's result, the asymmetry is known to be necessary in order to get an optimal algorithm [40]; it remains an open question whether the same is true for the problem of bounding tail size in cup games.
Related work on the roles of backlog and tail size in data structures.
Bounds on backlog have been used extensively in data-structure deamortization [2, 3, 18-20, 25, 27, 37], where the scheduling decisions by the emptier are used to decide how a data structure should distribute its work.
Until recently, the applications focused primarily on in-memory data structures, since external-memory data structures often cannot afford the cost of a buffer overflowing by an ω(1) factor. Recent work shows how to use bounds on tail size in order to solve this problem, and presents a new technique for applying cup games to external-memory data structures [10]. A key insight is that if a cup game has small tail size, then the water in "overflowed cups" (i.e., cups with fill more than O(1)) can be stored in a small in-memory cache. The result is that every cup consumes exactly Θ(1) blocks in external memory, meaning that each cup can be read/modified by the data structure in O(1) I/Os. This insight was recently applied to external-memory dictionaries in order to eliminate flushing cascades in write-optimized data structures [10].
Outline. The paper is structured as follows. Section 2 describes a new randomized algorithm that achieves small tail size without resource augmentation. Section 3 gives a technical overview of the algorithm's analysis and of the other results in this paper. Section 4 then presents the full analysis of the algorithm and Section 5 presents (nearly) matching lower bounds. Finally, Section 6 gives lower bounds against unending guarantees and analyzes the amount of resource augmentation needed for such guarantees.
Conventions. Although in principle an arbitrary constraint height c can be used to determine which cups contribute to the tail size, all of the algorithms in this paper work with c = 2. Thus, throughout the rest of the paper, we define the tail size to be the number of cups with height 2 or greater.
As a convention, we say that an event occurs with high probability in n if the event occurs with probability at least 1 − 1/n^c for an arbitrarily large constant c of our choice. The constant c is allowed to affect other constants in the statement. For example, an algorithm that achieves tail size c log n with probability at least 1 − 1/n^c is said to achieve tail size O(log n) with high probability in n.
The Asymmetric Smoothed Greedy Algorithm
Past work on randomized emptying algorithms has focused on analyzing the smoothed greedy algorithm [11,28]. The algorithm begins by randomly perturbing the starting state of the system: the emptier places a random offset r_j of water into each cup j, where the r_j's are selected independently and uniformly from [0, 1). The emptier then follows a greedy emptying algorithm, removing water from the fullest cup at each step. If the fullest cup contains fill less than 1, however, then the emptier skips its turn. This ensures that the fractional amount of water in each cup j (i.e., the amount of water modulo 1) is permanently randomized by the initial offset r_j. The randomization of the fractional amounts of water in each cup has been critical to past randomized analyses [11,28], and continues to play an important (although perhaps less central) role in this paper. This paper introduces a new variant of the smoothed greedy algorithm that we call the asymmetric smoothed greedy algorithm. The algorithm assigns a random priority p_j ∈ [0, 1) to each cup j (at the beginning of the game) and uses these to "break ties" when cups contain relatively small amounts of water. Interestingly, by always breaking these ties in the same direction, we change the dynamics of the game in a way that allows for new analysis techniques. We describe the algorithm in detail below.
Algorithm description.
At the beginning of the game, the emptier selects random offsets r j ∈ [0, 1) independently and uniformly at random for each cup j. Prior to the game beginning, r j units of water are placed in each cup j. This water is for "bookkeeping" purposes only, and need not physically exist. During initialization, the emptier also assigns a random priority p j ∈ [0, 1) independently and uniformly at random to each cup j.
After each step t, the emptier selects (up to) p different cups to remove 1 unit of water from as follows. If there are p or more cups containing 2 or more units of water, then the emptier selects the p fullest such cups. Otherwise, the emptier selects all of the cups containing 2 or more units of water, and then resorts to cups containing fill in [1,2), choosing between these cups based on their priorities p j (i.e., choosing cups with larger priorities over those with smaller priorities). The emptier never removes water from any cup containing less than 1 unit of water.
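The emptier's selection rule can be restated compactly in code. The sketch below is an illustrative restatement rather than the authors' implementation; it returns the cups emptied at a single step, given the current fills (with the bookkeeping offsets already included) and the fixed priorities.

```python
import random

def emptier_choices(fills, priorities, p=1):
    """One step of the asymmetric smoothed greedy emptier.

    fills      : dict cup -> current fill (offsets r_j already included)
    priorities : dict cup -> fixed random priority p_j in [0, 1)
    p          : number of processors (cups emptied per step)
    Returns the list of cups to remove one unit of water from.
    """
    heavy = [j for j in fills if fills[j] >= 2]          # fill >= 2: fullest first
    heavy.sort(key=lambda j: fills[j], reverse=True)
    if len(heavy) >= p:
        return heavy[:p]
    light = [j for j in fills if 1 <= fills[j] < 2]      # fill in [1,2): by priority
    light.sort(key=lambda j: priorities[j], reverse=True)
    return heavy + light[:p - len(heavy)]                # never touch cups with fill < 1

# Initialization with random offsets and priorities
n = 8
rng = random.Random(0)
fills = {j: rng.random() for j in range(n)}              # offsets r_j in [0, 1)
priorities = {j: rng.random() for j in range(n)}         # priorities p_j in [0, 1)
fills[3] += 2.4; fills[5] += 1.2; fills[6] += 1.1        # example filler moves
print(emptier_choices(fills, priorities, p=1))           # -> [3] (the only heavy cup)
```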
Threshold crossings and a threshold queue. When discussing the algorithm, several additional definitions and conventions are useful. We say that threshold (j, i) is crossed if cup j contains at least i units of water for positive integer i. When i = 1, the threshold (j, i) is called a light threshold , and otherwise (j, i) is called a heavy threshold . One interpretation of the emptying algorithm is that there is a queue Q of thresholds (j, i) that are currently crossed. Whenever the filler places water into cups, this may add thresholds (j, i) to the queue. And whenever the emptier removes water from some cup j, this removes some threshold (j, i) from the queue. When selecting thresholds to remove from the queue, the emptier prioritizes heavy thresholds over light ones. Within the heavy thresholds, the emptier prioritizes based on cup height, and within the light thresholds the emptier prioritizes based on cup priorities p j .
As a convention, we say that a cup j is queued if (j, 1) is in Q (or, equivalently, if (j, i) is in the queue for any i). The emptier is said to dequeue cup j whenever threshold (j, 1) is removed from the queue. The size of the queue Q refers to the number of thresholds in the queue (rather than the number of cups).
Technical Overview
In this section we give an overview of the analysis techniques used in the paper. We begin by discussing the analysis of the asymmetric smoothed greedy algorithm. To start, we focus our analysis on the single-processor cup game, in which p = 1.
The unpredictability guarantee. At the heart of the analysis is what we call the unpredictability guarantee, which, roughly speaking, establishes that the filler cannot predict large sets of cups that will all be over-full at the same time as one-another. We show that if an algorithm satisfies a certain version of the unpredictability guarantee, along with certain natural "greedy-like" properties, then the algorithm is guaranteed to exhibit a small tail size.
Formally, we say that an emptying algorithm satisfies R-unpredictability at a step t if for any oblivious filling algorithm, and for any set of cups S whose size is a sufficiently large constant multiple of R, there is high probability in n that at least one cup in S has fill less than 1 after step t. In other words, for any polynomial f (n), there exists a constant c such that: for each set S ⊆ [n] of cR cups, the probability that every cup in S has height 1 or greater at step t is at most 1/f (n).
How R-unpredictability helps. Rather than proving that R-unpredictability causes the tail size to stay small, we instead show the contrapositive. Namely, we show that if there is a filling strategy that achieves a large tail size, the strategy can be adapted to instead violate R-unpredictability.
Suppose that the filler is able to achieve tail size cR at some step t, where c is a large constant. Then during each of the next cR steps, the emptier will remove water from cups containing fill 2 or more (here, we use the crucial fact that the emptier always prioritizes cups with fills 2 or greater over cups with fills smaller than 2). This means that, during steps t + 1, . . . , t + cR, the set of cups with fill 1 or greater is monotonically increasing. During these steps the filler can place 1 unit of water into each of the cups 1, 2, . . . , cR in order to ensure that these cups all contain fill 1 or greater after step t + cR. Thus the filler can transform the initial tail size of cR into a large set of cups S = {1, 2, . . . , cR} that all have fill 1 or greater. In other words, any filling strategy for achieving large tail size (at some step t) can be harnessed to violate R-unpredictability (at some later step t + cR).
The directness of the argument above may seem to suggest that in order to prove the Runpredictability, one must first (at least implicitly) prove a bound on tail size. A key insight in this paper is that the use of priorities in the asymmetric smoothed greedy algorithm allows for R-unpredictability to be analyzed as its own entity.
Our algorithm analysis establishes log n log log n-unpredictability for the first poly n steps of any cup game, with high probability in n. This, in turn, implies a bound of O(log n log log n) on tail size.
Establishing unpredictability. We prove that, out of the roughly n/2 cups j with priorities p j ≥ 1/2, at most O(log n log log n) of them are queued (i.e., contain fill 1 or greater) at a time, with high probability in n. Recall that the cups with priority p j ≥ 1/2 are prioritized by the asymmetric smoothed greedy algorithm when the algorithm is choosing between cups with fills in the interval [1,2). This preferential treatment does not extend the case where there are cups containing fill ≥ 2, however. Remarkably, the limited preferential treatment exhibited by the algorithm is enough to ensure that the number of queued high-priority cups never exceeds O(log n log log n).
The bound of O(log n log log n) on the number of queued cups with priorities ≥ 1/2 implies log n log log n-unpredictability as follows. For any fixed set S of cups, the number of cups j in S with priority p j ≥ 1/2 will be roughly |S|/2 with high probability in n. If |S|/2 is at least a sufficiently large constant multiple of log n log log n, then the number of cups with p j ≥ 1/2 in S exceeds the total number of cups with p j ≥ 1/2 that are queued. Thus S must contain at least one non-queued cup, as required for the unpredictability guarantee.
In order to bound the number of queued cups with priority p j ≥ 1/2 by O(log n log log n), we partition the cups into Θ(log log n) priority levels based on their priorities p j . Let q be a sufficiently large constant multiple of log log n. The priority level of a cup j is given by ⌊p j · q⌋ + 1. (Note that the priority levels are only needed in the analysis, and the algorithm does not have to know q.) We show that with high probability in n, there are never more than O(log n log log n) queued cups with priority level ≥ q/2. Note that, although we only care about bounding the number of queued whose priority-levels are in the top fifty percentile, our analysis will take advantage of the fact that the priorities p j are defined at a high granularity (rather than, for example, being boolean).
The stalled emptier problem. Bounding the number of queued cups with priority level greater than ℓ directly is difficult for the following reason: Over the course of a sequence of steps, the filler may cross many light thresholds cups with priority level greater than ℓ, while the emptier only removes heavy thresholds from Q (i.e., the emptier empties exclusively from cups of height 2 or greater). This means that, in a given sequence of steps, the number of queued cups with priority level greater than ℓ could increase substantially. We call this the stalled emptier problem . Note that the stalled emptier problem is precisely what enables the connection between tail size and R-unpredictability above, allowing the filler to transform large tail size into a violation of Runpredictability. As a result, any analysis that directly considers the stalled emptier problem must also first bound tail-size, bringing us back to where we started.
Rather than bounding the number of queued cups with priority level greater than ℓ, we instead compare the number of queued cups at priority level greater than ℓ to the number at priority level ℓ. The idea is that, if the stalled-emptier problem allows for the number of queued priority-level greater than ℓ to grow large, then it will allow for the number of queued priority-level-ℓ cups to grow even larger. That is, without proving any absolute bound on the number of cups at a given priority level, we can still say something about the ratio of high-priority cups to low-priority cups in the queue.
To be precise, we prove that, whenever there are k queued cups at some priority level ℓ, there are at most O( √ qk log n + log n) queued cups at priority level > ℓ (recall that q = Θ(log log n) is the number of priority levels). Since the number of cups with priority level at least 1 is deterministically anchored at n, this allows for us to inductively bound the number of queued cups with large priority levels ℓ. In particular, the number of queued cups at priority level q/2 or greater never exceeds O(log n log log n).
Comparing the number of queued cups with priority level ℓ versus > ℓ. Suppose that after some step t, there are some large number k of queued cups with priority level ≥ ℓ. We wish to show that almost all of these k cups have priority level exactly ℓ. Before describing our approach in detail (which we do in the following two subheaders), we give an informal description of the approach. Let k 1 be number of priority-level-ℓ queued cups, and let k 2 be the number of priority-level-greater-than-ℓ queued cups. The only way that there can be a large number k 2 of priority-level-greater-than-ℓ cups queued is if they have all entered the queue since the last time that a level-ℓ cup was dequeued. This means that the size of Q has increased by at least k 2 since the last time that a priority-level-ℓ cup was dequeued. On the other hand, we show that priority-level-ℓ cups accumulate in Q at a much faster rate than the size of Q varies. In particular, we show that both the rate at which priority-level ℓ cups accumulate in Q and the rate at which Q's size varies are controlled by what we call the "influence" of a time-interval, and that the former is always much larger than the latter. This ensures that k 1 ≫ k 2 .
Note that the analysis avoids arguing directly that the number of queued high-priority cups is small, which could be difficult due to the stalled emptier problem. Intuitively, the analysis instead shows that low-priority cups do a good job "pushing" the high-priority cups out of the queue, ensuring that the ratio of low-priority cups (i.e., cups with priority level ℓ) to high-priority cups (i.e., cups with priority level > ℓ) is always very large.
Relating the number of high-priority queued cups to changes in |Q|. Let t 0 be the most recent step t 0 ≤ t such that at least k + 1 distinct cups C with priority level ℓ cross thresholds during steps t 0 , . . . , t. (Recall that k is the number of queued cups with priority level ≥ ℓ after step t.) One can think of the steps t 0 , . . . , t as representing a long period of time in which many cups with priority level ℓ have the opportunity to accumulate in Q. We will now show that the use of priorities in the asymmetric smoothed greedy algorithm causes the following property to hold: The number of queued cups with priority level > ℓ after step t is bounded above by the amount that |Q| varies during steps t 0 , . . . , t.
Because Q contains only k queued cups with priority level ≥ ℓ after step t, at least one cup from C must be dequeued during steps t 0 , . . . , t (otherwise, Q would contain at least |C| = k + 1 level-ℓ cups after step t). Let t * be the final step in t 0 , . . . , t out of those that dequeue a cup with priority level ≤ ℓ, and let Q t * and Q t denote the queue after steps t * and t, respectively.
By design, the only way that the asymmetric smoothed greedy algorithm can dequeue a cup with priority level ≤ ℓ at step t * is if the queue Q t * consists exclusively of light thresholds (i.e., thresholds of the form (j, 1)) for cups j with priority level ≤ ℓ. Moreover, the thresholds in Q t * must remain present in Q t , since by the definition of t * no cups with priority level ≤ ℓ are dequeued during steps t * + 1, . . . , t.
Since Q t * ⊆ Q t and Q t * contains only thresholds for cups with priority level ≤ ℓ, the total number of thresholds in Q t for cups with priority level > ℓ is at most |Q t | − |Q t * |. In other words, the only way that a large number of cups with priority level > ℓ can be queued after step t is if the size of Q varies by a large amount during steps t 0 , . . . , t.
Although t − t 0 may be very large compared to k (e.g. poly n) we show that the amount by which |Q| varies during steps t 0 , . . . , t is guaranteed to be small as a function of k, bounded above by O( √ kq log n). This means that, out of the k cups with priority level ≥ ℓ in Q t , at most O( √ kq log n) of them can have priority level ℓ + 1 or larger.
The influence property: bounding the rate at which |Q| varies. The main tool used to analyze the rate at which Q's size varies is to analyze sequences of steps based on their influence. For a sequence of steps I, the influence of I is defined to be Σ_{j=1}^{n} min(1, c_j(I)), where c_j(I) is the amount of water poured into cup j during interval I. We show that, for any priority level ℓ and for any step interval I with influence 2rq for some r, either r = O(log n), or two important properties are guaranteed to hold with high probability:
• Step interval I crosses thresholds in at least r cups with priority level ℓ. This is true of any interval I with influence at least 2qr by a simple concentration-bound argument. 8
• The size of Q varies by at most O(√(qr log n)) during step interval I. The key here is to show that, during each subinterval I′ ⊆ I, the number of thresholds crossed by the filler is within O(√(qr log n)) of |I′|. In order to do this, we take advantage of the initial random offsets r_j that are placed into each cup by the algorithm. If the filler puts some number c_j(I′) of units of water into a cup j during I′, then the cup j will deterministically cross ⌊c_j(I′)⌋ thresholds, and with probability c_j(I′) − ⌊c_j(I′)⌋ will cross one additional threshold (with the outcome depending on the random value r_j). Since the influence of I′ is at most 2rq, we know that Σ_j (c_j(I′) − ⌊c_j(I′)⌋) ≤ 2rq. That is, if we consider only the threshold crossings that are not certain, then the number of them is a sum of independent 0-1 random variables with mean at most 2rq. By a Chernoff bound, this number varies from its mean by at most O(√(qr log n)), with high probability in n.
Combined, we call these the influence property . By a union bound, the influence property holds with high probability on all sub-sequences of steps during the cup game, and for all values r.
The influence property creates a link between the number of cups with priority level ℓ that cross thresholds during a sequence of steps I, and the amount by which |Q| varies during steps I. Applying this link with r = k + 1 to steps t 0 , . . . , t, as defined above, implies that |Q| varies by at most O( √ qk log n) during steps t 0 , . . . , t. This, in turn, bounds the number of queued cups with priority level ℓ + 1 or larger by O( √ qk log n) after step t, completing the analysis.
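As a purely illustrative aside (not part of the formal analysis), the influence bookkeeping can be made concrete with a small Python sketch; the dictionary poured, mapping each cup j to c_j(I), is a hypothetical representation of an interval's pours:

def influence(poured):
    """Influence of a step interval I: the sum over cups of min(1, c_j(I)),
    where poured[j] = c_j(I) is the water placed into cup j during I."""
    return sum(min(1.0, c) for c in poured.values())

# Example: three cups receive 0.3, 2.5, and 0.9 units of water during I,
# so the influence is 0.3 + 1.0 + 0.9 = 2.2.
print(influence({1: 0.3, 2: 2.5, 3: 0.9}))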
Extending the analysis to the multi-processor cup game.
The primary difficulty in analyzing the multi-processor cup game (i.e., when p > 1) is that the emptier must remove water from p different cups, even if almost all of the water in the system resides in fewer than p cups. For example, the emptier may dequeue a cup j even though there are up to p − 1 other higher-priority cups that are still queued; furthermore, each of these higher-priority cups may contribute a large number of heavy thresholds to the queue Q.
We solve this issue by leveraging recent bounds on backlog for the p-processor cup game [28], which prove that the deterministic greedy emptying algorithm achieves backlog O(log n). This can be used to ensure that, for any p − 1 cups that are queued, each of them can only contribute a relatively small number of thresholds to the queue Q. These "misbehaving" thresholds can then be absorbed into the algorithm analysis.
Nearly matching lower bounds on tail size.
Our lower-bound constructions extend the techniques used in past works for backlog [11,19,28] in order to apply similar ideas to tail size. One of the surprising features of our lower bounds is that they continue to be nearly tight even in the multi-processor case -the same is not known to be true for backlog. We defer further discussion of the lower bounds to Section 5.
Lower bounds against unending guarantees. Finally, we consider the question of whether the analysis of the asymmetric smoothed greedy algorithm can be extended to offer an unending guarantee, i.e., a guarantee that for any step t, no matter how large, there is a high probability at step t that the backlog and tail size are small.
We show that, without the use of resource augmentation, unending guarantees are not possible for the asymmetric smoothed greedy algorithm, or, more generally, for any monotone memoryless emptying algorithm. Lower bounds against unending guarantees have previously been shown for the multi-processor cup game [28], but remained open for the single-processor cup game.
The filling strategy, which we call the fuzzing algorithm, has a very simple structure: the filler spends a large number (i.e., n^{Θ(n)}) of steps randomly placing water in multiples of 1/2 into cups 1, 2, . . . , n. The filler then disregards a random cup, which for convenience we will denote by n, and spends a large number of steps randomly placing water into the remaining cups 1, 2, . . . , n − 1.
The filler then disregards another random cup, which we will call cup n − 1, and spends a large number of steps randomly placing water into cups 1, 2, . . . , n − 2, and so on. We call the portion of the algorithm during which the filler is focusing on cups 1, 2, . . . , i the i-cup phase.
Rather than describe the analysis of the fuzzing algorithm (which is somewhat complicated), we instead give an intuition for why the algorithm works. For simplicity, suppose the emptier follows the (standard) smoothed greedy emptying algorithm.
Between the i-cup phase and the (i − 1)-cup phase, the filler disregards a random cup (that we subsequently call cup i). Intuitively, at the time that cup i is discarded, there is a roughly 50% chance that cup i has more fill than the average of cups 1, 2, . . . , i. Then, during the (i − 1)-cup phase, there is a reasonably high probability that the filler at some point manages to make all of cups 1, 2, . . . , i − 1 have almost equal fills to one another. At this point, the emptier will choose to empty out of cup i instead of cups 1, 2, . . . , i − 1. The fact that the emptier neglects cups 1, 2, . . . , i − 1 during the step, even though the filler places 1 unit of water into them, causes their average fill to increase by 1/(i − 1). Since this happens with constant probability in every phase, the result is that, by the beginning of the √n-cup phase, there are √n cups each with expected fill Ω(log n).
Formalizing this argument leads to several interesting technical problems. Most notably, the cups 1, 2, . . . , i − 1 having almost equal fills (rather than exactly equal fills) may not be enough for cup i to receive the emptier's attention. Moreover, if we wish to analyze the asymmetric smoothed greedy algorithm or, more generally, the class of monotone memoryless algorithms, then cups are not necessarily judged by the emptying algorithm based on their fill heights, and may instead be selected based on an essentially arbitrary objective function that need not treat cups symmetrically. These issues are handled in Section 6 by replacing the notion of cups 1, 2, . . . , i − 1 having almost equal fills as each other with the notion of cups 1, . . . , i − 1 reaching a certain type of specially designed equilibrium state that interacts well with the emptier.
Algorithm analysis
In this section, we give the full analysis of the p-processor asymmetric smoothed greedy algorithm. The main result of the section is Theorem 4.10, which bounds the tail size of the game by O(log n log log n + p log p) for the first poly n steps of the game with high probability in n.
In addition to using the conventions from Section 2 we find it useful to introduce one additional notation: for a sequence of steps I, define c j (I) to be the amount of water placed into cup j during I. We also continue to use the convention from Section 3 that q is a large constant multiple of log log n, and that each cup j is assigned a priority level given by ⌊p j · q⌋ + 1.
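For intuition only, the priority-level convention can be pictured as the following sketch, which assumes (as the probability-1/q calculation in the proof of Lemma 4.2 suggests) that each p_j is an independent uniform sample from [0, 1):

import math
import random

def assign_priority_levels(n, q):
    """Give each cup j in {1, ..., n} the priority level floor(p_j * q) + 1,
    where p_j is uniform in [0, 1); each of the q levels then occurs with
    probability exactly 1/q."""
    return {j: math.floor(random.random() * q) + 1 for j in range(1, n + 1)}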
Recall that a cup j crosses a threshold (j, i) whenever the fill of cup j increases from some quantity f < i to some quantity f′ ≥ i for i ∈ N. A key property of the smoothed greedy algorithm, which was originally noted by Bender et al. [11], is that the number of threshold crossings across any sequence of steps can be expressed using a sum of independent 0-1 random variables. This remains true for the asymmetric smoothed greedy algorithm, and is formalized in Lemma 4.1.
Lemma 4.1 (Counting threshold crossings). For a sequence of steps I, and for a cup j, the number of threshold crossings in cup j is ⌊c j (I)⌋ + X j , where X j is a 0-1 random variable with mean c j (I) − ⌊c j (I)⌋. Moreover, X 1 , X 2 , . . . , X n are independent.
Proof. Recall that the emptier only removes water from a cup j if cup j contains at least 1 unit. Moreover, the emptier always removes exactly 1 unit of water from cups. Since threshold crossings in each cup j depend only on the fractional amount of water (i.e., the amount of water modulo 1) in the cup, the behavior of the emptier cannot affect when thresholds are crossed within each cup.
Let t_0 be the final step prior to interval I. For each cup j, the fractional amount of water in the cup at the beginning of interval I is
(1) (r_j + c_j([1, t_0])) mod 1.
Since r_j is uniformly random in [0, 1], it follows that (1) is as well. The first c_j(I) − ⌊c_j(I)⌋ units of water poured into cup j during interval I will therefore cross a threshold with probability exactly c_j(I) − ⌊c_j(I)⌋. The next ⌊c_j(I)⌋ units of water placed into cup j are then guaranteed to cause exactly ⌊c_j(I)⌋ threshold crossings. The number of crossings in cup j during the step sequence is therefore ⌊c_j(I)⌋ + X_j, where X_j is a 0-1 random variable with mean c_j(I) − ⌊c_j(I)⌋, and where the randomness in X_j is due to the random initial offset r_j. Because r_1, r_2, . . . , r_n are independent, so are X_1, X_2, . . . , X_n.
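The threshold-crossing count in Lemma 4.1 is easy to simulate; the following sketch (with hypothetical dictionaries poured and frac_start standing in for c_j(I) and the fractional starting fills) makes the role of the random offsets explicit:

import math
import random

def count_threshold_crossings(poured, frac_start):
    """poured[j] = c_j(I); frac_start[j] = the fractional fill of cup j at the
    start of I, which is uniform in [0, 1) because of the random offset r_j.
    Cup j crosses floor(c_j(I)) thresholds deterministically, plus one extra
    threshold exactly when frac_start[j] + (c_j(I) mod 1) >= 1, an event of
    probability c_j(I) - floor(c_j(I)), matching Lemma 4.1."""
    total = 0
    for j, c in poured.items():
        frac = c - math.floor(c)
        total += math.floor(c) + (1 if frac_start[j] + frac >= 1.0 else 0)
    return total

# One random trial with three cups:
poured = {1: 1.7, 2: 0.4, 3: 2.0}
frac_start = {j: random.random() for j in poured}
print(count_threshold_crossings(poured, frac_start))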
One consequence of Lemma 4.1 is that, if a sequence of steps I has a large influence s, then each priority level ℓ will have at least Ω(s/q) cups that cross thresholds during interval I (recall that q is the number of priority levels).
Lemma 4.2 (The influence property, part 1). Let I be a sequence of steps with influence s, and let ℓ be a priority level. Then, with high probability in n, the number of distinct cups with priority level ℓ that cross thresholds during I is at least s/(2q) − O(log n).
Proof. By Lemma 4.1, the probability that cup j crosses at least one threshold during step sequence I is min(c_j(I), 1), independently of other cups j′. The number X of distinct cups that cross thresholds during interval I is therefore a sum of independent indicator random variables with mean s, where s is the influence of I. Since each cup has probability 1/q of having priority level ℓ, the number Y of cups with priority level ℓ that cross thresholds in interval I is a sum of independent indicator random variables with mean s/q. If s/q ≤ O(log n), then the number of distinct cups with priority level ℓ to cross thresholds is trivially at least 0 ≥ s/(2q) − O(log n). Suppose, on the other hand, that s/q ≥ c log n for a sufficiently large constant c. Then by a Chernoff bound, Y ≥ s/(2q) with high probability in n.
The proofs of the preceding lemmas have not needed to explicitly consider the effect of there being a potentially large number p of processors. In subsequent proofs, the multi-processor case will complicate the analysis in two ways. First, the emptier may sometimes dequeue a cup, even when there are more than p heavy thresholds in the queue (this can happen when the heavy thresholds all belong to a set of fewer than p cups). Second, and similarly, the emptier may sometimes be unable to remove a full p thresholds from the queue Q, even though |Q| > p (this can happen if all of the thresholds in Q belong to a set of fewer than p cups). It turns out that both of these problems can be circumvented using the fact that the emptying algorithm achieves small backlog. In particular, this ensures that no single cup can ever contribute more than O(log log n + log p) thresholds to Q:
Lemma 4.3 (K. [28]). In any multi-processor cup game of poly n length, the asymmetric smoothed greedy algorithm achieves backlog O(log log n + log p) after each step, with high probability in n.
Using Lemma 4.3 as a tool to help in the case of p > 1, we now return to the analysis approach outlined in Section 3.
Remark 4.4. The proof of Lemma 4.3 given in [28] is highly nontrivial. We remark that, although Lemma 4.3 simplifies our analysis, there is also an alternative lighter-weight approach that one can use in place of the lemma. In particular, one can begin by analyzing the h-truncated cup game for some sufficiently large h ≤ O(log log n + log p). In this game, the height of each cup is deterministically bounded above by h, and whenever the height of a cup exceeds h, 1 unit of water is removed from the cup (and that unit does not count as part of the emptier's turn). The h-truncated cup game automatically satisfies the backlog property stated by Lemma 4.3, allowing for it to be analyzed without requiring the lemma. The analysis can then be used to bound the backlog of the h-truncated cup game to at most h/2 with high probability (using the analysis by [28] of the greedy algorithm, applied to the Õ(log n + p) cups in the tail). It follows that with high probability, the h-truncated cup game and the (standard) cup game are indistinguishable. This means that the high-probability bounds on tail size for the h-truncated cup game carry over directly to the standard cup game.
The next lemma shows that, even though many threshold crossings may occur in a sequence of steps I, the size of the queue Q varies by only a small amount as a function of the influence of I.
Lemma 4.5 (The influence property, part 2). Consider a sequence of steps I with influence at most s during a game of length at most poly n. For each step t ∈ I, let Q_t denote the queue after step t. With high probability in n,
max_{t, t′ ∈ I} | |Q_t| − |Q_{t′}| | ≤ O(√(s log n) + log n + p(log log n + log p)).
Proof. We begin with a simpler claim:
Claim 4.6. For any subinterval I′ ⊆ I, the number of threshold crossings during I′ is within O(√(s log n) + log n) of p|I′|, with high probability in n.
Proof. Because I has influence at most s, so does I′. Lemma 4.1 tells us that, during I′, the number of threshold crossings X that occur satisfies E[X] = p|I′|. Moreover, X satisfies X = A + Σ_{j=1}^{n} X_j, where A is a fixed value and the X_j's are independent 0-1 random variables, each taking value 1 with probability c_j(I′) − ⌊c_j(I′)⌋ ≤ min(c_j(I′), 1). Note that I′ has influence Σ_j min(c_j(I′), 1) ≤ s, and thus E[Σ_{j=1}^{n} X_j] ≤ s. By a multiplicative Chernoff bound, it follows that for δ < 1,
Pr[ |Σ_{j=1}^{n} X_j − E[Σ_{j=1}^{n} X_j]| ≥ δs ] ≤ 2e^{−δ²s/3};
taking δ = Θ(√((log n)/s)) when s ≥ log n (and δ a constant otherwise) shows that Σ_j X_j deviates from its mean by at most O(√(s log n) + log n) with high probability in n. Since E[X] = p|I′|, it follows that the number of threshold crossings in interval I′ is within O(√(s log n) + log n) of p|I′| with high probability in n.
Applying a union bound to the poly n subintervals of steps I′ ⊆ I, Claim 4.6 tells us that every subinterval I′ ⊆ I contains p|I′| ± O(√(s log n) + log n) threshold crossings with high probability in n.
To complete the proof, consider some subinterval I′ ⊆ I and let m be the (absolute) amount by which Q changes in size during I′. We wish to show that m ≤ O(√(s log n) + log n + p(log log n + log p)).
Suppose that |Q| shrinks by m during I′. Then the number of threshold crossings in subinterval I′ would have to be at most p|I′| − m, meaning that m ≤ O(√(s log n) + log n), as desired.
Suppose, on the other hand, that |Q| grows by m during I′. Call a step t ∈ I′ removal-friendly if the emptier removes p full units of water during step t (i.e., prior to the emptier removing water, there are at least p cups with height 1 or greater). By Lemma 4.3, with high probability in n, the size of Q after any removal-unfriendly step is at most O(p(log log n + log p)). If I′ consists exclusively of removal-friendly steps, then at least p|I′| + m thresholds must be crossed during I′ in order for |Q| to increase by m; thus m ≤ O(√(s log n) + log n). On the other hand, if I′ contains at least one removal-unfriendly step, then there must be some last such step t in I′ = (t_0, t_1]. Since |Q| ≤ O(p(log log n + log p)) after step t but |Q| ≥ m after step t_1, it must be that during the steps (t, t_1] the size of Q increases by at least m − O(p(log log n + log p)). Since the interval (t, t_1] consists entirely of removal-friendly steps, we can apply the reasoning from the first case (i.e., the case of only removal-friendly steps) to deduce that m ≤ O(√(s log n) + log n + p(log log n + log p)), completing the proof.
Combined, Lemmas 4.2 and 4.5 give the influence property discussed in Section 3. Using this property, we can now relate the number of queued cups with priority level ≥ ℓ to the number of queued cups with priority level ≥ ℓ + 1 for a given priority level ℓ ∈ N.
Lemma 4.7 (Accumulation of low-priority cups). Let t ≤ poly n, let K be the number of queued cups with priority level ≥ ℓ after step t, and let m be the number of queued cups with priority level ≥ ℓ + 1 after step t. With high probability in n,
(2) m ≤ O(√(qK log n) + log n + p(log log n + log p)).
Proof. For each k ∈ {1, 2, . . . , n}, define I_k to be the smallest step-interval ending at step t and with influence at least 2qk (or define I_k = [1, t] if the total influence of [1, t] is less than 2qk). By Lemmas 4.2 and 4.5, each I_k satisfies the following two properties with high probability in n:
• The many-crossings property. Either I_k is all of [1, t], or the number of priority-level-ℓ cups to cross thresholds during I_k is at least k − O(log n).
• The low-variance property. The size of Q varies by at most B_k := O(√(qk log n) + log n + p(log log n + log p)) during I_k. To see this, we use the fact that I_k has influence at most 2qk + p, which by Lemma 4.5 limits the amount by which Q varies to O(√((qk + p) log n) + log n + p(log log n + log p)).
Since √((qk + p) log n) ≤ √(qk log n) + √(p log n) ≤ √(qk log n) + p + log n, it follows that the amount by which Q varies during I_k is at most O(√(qk log n) + log n + p(log log n + log p)), with high probability in n. Collectively, this pair of properties is called the influence property. By a union bound, the influence property holds for all k ∈ {1, 2, . . . , n} with high probability in n. It follows that the property also holds for k = K (recall that K is the number of queued cups with priority level ≥ ℓ after step t).
If I_K = [1, t], then the total size of Q can be at most B_K (since Q begins at size 0 at the start of I_K). It follows that m ≤ B_K, meaning that (2) is immediate. In the rest of the proof, we focus on the case in which I_K ≠ [1, t].
If the emptier never dequeues any priority-level-ℓ cups during I_K, then by the many-crossings property, there are at least K − O(log n) priority-level-ℓ cups queued after step t. The number of queued cups with priority levels greater than ℓ is therefore at most O(log n), as desired.
Suppose, on the other hand, that there is at least one step in I K at which the emptier dequeues a priority-level-ℓ (or smaller) cup, and let t * be the last such step. Let Q t * be the set of queued thresholds after t * , and let Q t be the set of queued thresholds after step t. Call a threshold in Q t * permanent if it is a light threshold for a cup with priority level ≤ ℓ. All permanent thresholds in Q t * are guaranteed to also be in Q t , since t * is the final step in I K during which the emptier dequeues such a threshold. The non-permanent thresholds in Q t * must reside in a set of fewer than p cups, since the emptier would rather have dequeued one of them during step t * than to have dequeued a cup with priority level ≤ ℓ. By Lemma 4.3, the number of non-permanent thresholds in Q t * is therefore at most O(p log log n + p log p), with high probability in n.
By the low-variance property, the sizes of Q_t and Q_{t*} differ by at most B_K. It follows that the permanent thresholds in Q_{t*} make up all but O(B_K) of the thresholds in Q_t. Recall that Q_t contains thresholds from K different cups with priority level ≥ ℓ. It follows that Q_{t*} contains permanent thresholds from at least K − O(B_K) different cups with priority level ≥ ℓ. The permanent thresholds in Q_{t*} are all for cups with priority level ≤ ℓ, however. Thus there are at least K − O(B_K) cups with priority level exactly ℓ that are queued after step t* and remain queued after step t. This bounds the number of queued cups after step t with priority level greater than ℓ by at most O(B_K), completing the proof.
Since the number of queued cups with priority level ≥ 1 can never exceed n, Lemma 4.7 allows for us to bound the number of queued cups with priority level ≥ ℓ inductively. We argue that if q is a sufficiently large constant multiple of log log n, then the number of queued cups with priority level ≥ q/2 never exceeds O(log n log log n + p(log log n + log p)), with high probability in n. This can then be used to obtain (log n log log n + p log p)-unpredictability, as defined in Section 3.
Lemma 4.8 (The unpredictability guarantee). Consider a cup game of length poly n. For any step t, and for any set of cups S whose size is a sufficiently large constant multiple of log n log log n + p log p, at least one cup in S is not queued after step t, with high probability in n. In other words, each step t in the game satisfies (log n log log n + p log p)-unpredictability.
Proof. Suppose the number of priority levels q is set to be a sufficiently large constant multiple of log log n. For ℓ ∈ {1, 2, . . . , q}, let m ℓ denote the maximum number of queued cups with priority level ≥ ℓ during the game. We claim that m q/2 ≤ O(log n log log n + p log p) with high probability in n.
By Lemma 4.7, for any 1 ≤ ℓ < q, m_{ℓ+1} ≤ O(√(q m_ℓ log n) + log n + p log log n + p log p), with high probability in n. If we define X = q log n + log n + p log log n + p log p, then it follows that
(3) m_{ℓ+1} ≤ O(√(X m_ℓ) + X).
For each priority level ℓ, let δ_ℓ be the ratio δ_ℓ = m_ℓ/X. By (3), for any 1 ≤ ℓ < q, either δ_{ℓ+1} ≤ O(1) or δ_{ℓ+1} ≤ O(√(δ_ℓ)). It follows that, as long as q is a sufficiently large constant multiple of log log n, then δ_{q/2} ≤ O(1), and thus m_{q/2} ≤ O(X).
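The reason only Θ(log log n) priority levels are needed is that the recurrence δ_{ℓ+1} ≤ O(√(δ_ℓ)) collapses any polynomially large starting ratio to a constant after about log log n iterations; a tiny numerical illustration (for intuition only):

import math

def sqrt_iterations_until_constant(x, threshold=2.0):
    """Count how many times x -> sqrt(x) must be applied before x drops to at
    most `threshold`; this is roughly log2(log(x)), which is why q = Theta(log log n)
    priority levels suffice for delta_{q/2} to reach O(1)."""
    steps = 0
    while x > threshold:
        x = math.sqrt(x)
        steps += 1
    return steps

print(sqrt_iterations_until_constant(10 ** 9))  # prints 5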
Now consider a set of cups S of size at least cX, where c is a sufficiently large constant. By a Chernoff bound, the number of cups with priority level greater than q/2 in S is at least cX/4, with high probability in n. Since c is a sufficiently large constant, and since m_{q/2} ≤ O(X), this implies that S contains more than m_{q/2} cups with priority level q/2 or greater. Thus the cups in S with priority level q/2 or greater cannot all be queued after step t.
Note that X ≤ O(log n log log n + p log p). Thus every set S whose size is a sufficiently large constant multiple of log n log log n + p log p has high probability of containing at least one non-queued cup after step t, completing the proof of (log n log log n + p log p)-unpredictability.
To complete the analysis of the algorithm, we must formalize the connection between the unpredictability guarantee and tail size. Lemma 4.9. Suppose that a (randomized) emptying algorithm for the p-processor cup game on n cups satisfies R-unpredictability in the steps of any game of polynomial length, and further satisfies the "greediness property" that whenever there is a cup of height at least 2, the algorithm empties out of such a cup. Then in any game of polynomial length, the tail size s after each step t is O(R + p) with high probability in n.
Proof. Consider a polynomial f ∈ poly n, let c be a large constant, and let t ≤ poly n. Suppose for contradiction that there is a filling strategy such that, at time t, the tail size is at least cR + p with probability at least 1/f(n). Whenever the tail size is at least cR + p at time t, during each of the steps I = {t + 1, . . . , t + ⌈cR/p⌉} the emptier removes water exclusively from cups with fills at least 2. This means that the set of cups containing 1 or more units of water is monotonically increasing during steps t + 1, . . . , t + ⌈cR/p⌉. If the filler places 1 unit of water into each of cups 1, 2, . . . , cR during steps t + 1, . . . , t + ⌈cR/p⌉, then it follows that each of cups 1, 2, . . . , cR has fill 1 or greater after step t + ⌈cR/p⌉.
The preceding construction guarantees that, with probability at least 1/f(n), all of cups 1, 2, . . . , cR contain at least 1 unit of water at step t + ⌈cR/p⌉. If c is a sufficiently large constant, this violates R-unpredictability at step t + ⌈cR/p⌉, a contradiction.
Using the unpredictability guarantee, we can now complete the algorithm analysis:
Theorem 4.10. Consider a p-processor cup game that lasts for poly n steps and in which the emptier follows the asymmetric smoothed greedy algorithm. Then with high probability in n, the number of cups containing 2 or more units of water never exceeds O(log n log log n + p log p) and the backlog never exceeds O(log log n + log p) during the game.
Proof. By Lemma 4.8, (log n log log n + p log p)-unpredictability holds for each step in any game of length poly n. By Lemma 4.9, it follows that the tail size remains at most O(log n log log n+p log p), with high probability in n, during any game of length poly n. Lemma 4.3 bounds the height of the fullest cup in each step by O(log log n + log p) with high probability in n. Alternatively, by considering a cup game consisting only of the cups that contain greater than 2 units of water, the analysis of the deterministic greedy emptying algorithm (see [1] for p = 1 and [28] for p > 1) on O(log n log log n + p log p) cups implies that no cup ever contains more than O(log log n + log p) water, with high probability in n.
Lower Bounds
In this section we prove that the asymmetric smoothed greedy algorithm achieves (near) optimal tail size within the class of backlog-bounded algorithms.
An emptying algorithm is backlog-bounded if the algorithm guarantees that the backlog never exceeds f(n) for some polynomial f. This is a weak requirement in that the greedy algorithm achieves a bound of O(log n) on backlog [1, 28]. The main result in this section states that any backlog-bounded emptying algorithm must allow for a tail size of Ω̃(log n + p) with probability 1/poly n. The lower bound continues to hold even when the height requirement for a cup to be in the tail is increased to an arbitrarily large constant (rather than 2). When p = 1, the lower bound also applies to non-backlog-bounded emptying algorithms (Lemma 5.3).
Theorem 5.1. Let c_1 be a constant, and suppose n ≥ p + c_2 for a sufficiently large constant c_2. For any backlog-bounded emptying strategy, there is a poly n-step oblivious randomized filling strategy that gives the following guarantee. After the final step of the filling strategy, there are at least Ω(log n/log log n + p) cups with fill c_1 or greater, with probability at least 1/poly n.
To prove Theorem 5.1, we begin by describing a simple lower-bound construction that we call the (p, k, c)-filling strategy. The strategy is structurally similar to the lower-bound construction for backlog given by Bender et al [11].
Lemma 5.2. Let k, c ∈ N such that c ≥ 2, k ≤ n, and k/(pe^c) ≥ 2. Then there exists an O(k)-step oblivious randomized filling strategy for the p-processor cup game on n cups that causes Ω(k/e^c) cups to each have fill at least Θ(c), with probability at least 1/k^k. We call this strategy the (p, k, c)-filling strategy.
Proof. Define the (p, k, c)-filling strategy for the p-processor cup game on n ≥ k cups as follows. In each step i of the strategy, the filler places p/(k − p(i − 1)) units of water into each of k − p(i − 1) cups. The sets of cups S_i used in each step i are selected so that S_{i+1} = S_i \ {x_1, x_2, . . . , x_p} for some random distinct x_1, x_2, . . . , x_p ∈ S_i. The (p, k, c)-filling strategy completes after t steps, where t = ⌊(k/p)(1 − e^{−c})⌋ − 1. Note that k − pi ≥ p for every step i. We say that the (p, k, c)-filling strategy succeeds if at the beginning of each step i none of the cups in S_i have been touched (i.e., emptied from) by the emptier. If the (p, k, c)-filling strategy succeeds, then at the end of the i-th step of the strategy there will be k − pi cups, each with fill
Σ_{j=1}^{i} p/(k − p(j − 1)) ≥ ln(k/(k − pi)) − 1,
where the inequality uses the fact that k − pi ≥ p. Now consider the final step t of a successful (p, k, c)-filling strategy. By the requirement that k/(pe^c) ≥ 2, the number of untouched cups after step t satisfies ke^{−c} ≤ k − pt ≤ 2ke^{−c}. It follows that, after step t of a successful (p, k, c)-filling strategy, there are at least Θ(k/e^c) cups, each with fill at least Ω(log(k/(2ke^{−c}))) = Ω(c).
Next we evaluate the probability of a (p, k, c)-filling strategy being successful. If the first i steps of the (p, k, c)-filling strategy all succeed, then the (i + 1)-th step has probability at least 1/k^p of succeeding. In particular, the emptier may touch up to p cups j_1, . . . , j_p ∈ S_i during step i, and the random set {x_1, . . . , x_p} that is removed from S_i to obtain S_{i+1} has probability at least 1/k^p of being a superset of those touched cups. Since there are at most k/p steps, the (p, k, c)-filling strategy succeeds with probability at least 1/k^k.
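A minimal sketch of the schedule produced by the (p, k, c)-filling strategy follows; the cup labels and the use of random.sample to drop cups are illustrative choices, since the proof only requires that p random distinct cups be removed from S_i after each step:

import math
import random

def pkc_filling_schedule(p, k, c):
    """Return one (cups, amount_per_cup) pair per step of the (p, k, c)-filling
    strategy: at step i, p units of water are spread evenly over the
    k - p(i-1) surviving cups, after which p random cups are discarded."""
    assert c >= 2 and k / (p * math.exp(c)) >= 2
    survivors = list(range(1, k + 1))
    num_steps = math.floor((k / p) * (1 - math.exp(-c))) - 1
    schedule = []
    for _ in range(num_steps):
        schedule.append((list(survivors), p / len(survivors)))
        dropped = set(random.sample(survivors, p))
        survivors = [x for x in survivors if x not in dropped]
    return schedule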
If we assume that log n/log log n is a sufficiently large constant multiple of p, then we can apply Lemma 5.2 directly to achieve a tail of size Ω̃(log n). (Furthermore, note that Lemma 5.3 does not have any requirement that the emptier be backlog-bounded.)
Lemma 5.3. Let c_1 be a positive constant, and suppose p ≤ (log n)/(c_2 log log n) for a sufficiently large constant c_2 (where c_2 is large relative to c_1). Then there is an O(log n/log log n)-step oblivious randomized filling strategy for the p-processor cup game on n cups that causes Ω(log n/log log n) cups to all have height c_1 or greater after some step t ≤ poly n, with probability at least 1/poly n.
Proof. Let c ∈ N be a sufficiently large constant compared to c_1. By the assumption that c_2 is sufficiently large in terms of c_1, we may also assume that
(4) (log n/log log n)/(pe^c) ≥ 2.
By (4), we can use Lemma 5.2 to analyze the (p, log n/log log n, c)-filling strategy. The strategy causes Ω((log n/log log n) · e^{−c}) ≥ Ω(log n/log log n) cups to all have height at least c_1, with probability at least (log n/log log n)^{−log n/log log n} ≥ 1/poly n.
The next lemma gives a filling strategy for achieving tail size Ω(p) against any backlog-bounded emptying strategy. Remarkably, the construction in Lemma 5.4 succeeds with probability 1−e −Ω(p) (rather than with probability 1/ poly n).
Lemma 5.4. Let c_1 be a constant, and suppose n ≥ p + c_2 for a sufficiently large constant c_2. For any backlog-bounded emptying strategy, there is a poly n-step oblivious randomized filling strategy that gives the following guarantee. After the final step of the filling strategy, there are at least Ω(p) cups with fill c_1 or greater, with probability 1 − e^{−Ω(p)}.
Proof. For the sake of simplicity, we allow for the filler to sometimes swap two cups, meaning that the labels of the cups are interchanged.
The basic building block of the algorithm is a mini-phase, which consists of O(1) steps. In each step of a mini-phase the filler places 1 unit of water into each of cups 1, 2, . . . , p − 1, and then strategically distributes 1 additional unit of water among cups p, p + 1, . . . , n. Using the final unit of water, the filler follows a (1, ce^c, c)-filling strategy on cups p, p + 1, . . . , n, where c is a sufficiently large constant relative to c_1 satisfying n ≥ p + ce^c. We say that a mini-phase succeeds if the emptier removes only 1 unit of water from cups {p, p + 1, . . . , n} during each step in the mini-phase, and the (1, ce^c, c)-filling strategy succeeds within the mini-phase. By Lemma 5.2, any successful mini-phase will cause at least one cup j to have fill at least c_1 at the end of the mini-phase (and the filler will know j).
Mini-phases are composed together by the filler to form phases. During the i-th phase, the filler selects a random w_i ∈ [1, f(n)n^2] and performs w_i mini-phases (recall that f(n) is the polynomial such that the emptier achieves backlog f(n) or smaller). After the w_i-th mini-phase, the filler swaps cups i and j, where i is the phase number and j is the cup containing fill ≥ c_1 in the event that the most recent mini-phase succeeded. The full filling algorithm consists of p − 1 phases.
We claim that each phase i has constant probability of ending in a successful mini-phase (and thus swapping cup i with a new cup j ≥ p having fill ≥ c 1 ). Using this claim, one can complete the analysis as follows. If the swap in phase i is at the end of a successful mini-phase, then after the swap, the (new) cup i will have fill ≥ c 1 , and will continue to have fill ≥ c 1 for the rest of the filling algorithm, since the filler puts 1 unit in cup i during every remaining step. At the end of the algorithm, the number of cups with fill ≥ c 1 is therefore bounded below by a sum of p − 1 independent 0-1 random variables with total mean Ω(p). This means that the number of such cups with fill ≥ c 1 is at least Ω(p) with probability 1 − e −Ω(p) , as desired.
It remains to analyze the probability that a given phase i ends with a successful mini-phase. Call a mini-phase clean if the emptier removes 1 unit of water from each cup 1, 2, . . . , p − 1 during each step of the mini-phase, and dirty otherwise. Because each dirty mini-phase increases the total amount of water in cups 1, 2, . . . , p − 1 by at least 1, and because the emptying algorithm prevents backlog from ever exceeding f (n), there can be at most O(pf (n)) dirty mini-phases during phase i.
By Lemma 5.2, each mini-phase (independently) has at least a constant probability of either being dirty or of succeeding. Out of the f(n)n^2 possible mini-phases in phase i, there can only be O(f(n)p) ≤ o(f(n)n^2) dirty mini-phases. It follows that, with probability 1 − e^{−Ω(f(n)n^2)}, at least a constant fraction of the possible mini-phase indices s succeed (or would have succeeded had w_i been at least as large as s). Thus the w_i-th mini-phase succeeds with constant probability.
Combining the preceding lemmas, we prove Theorem 5.1.
Proof of Theorem 5.1. If p ≥ Ω(log n/ log log n), then the theorem follows immediately from Lemma 5.4. On the other hand, if p is a sufficiently large constant factor smaller than log n/ log log n, then the theorem follows from Lemma 5.3.
Lower Bounds Against Unending Guarantees
In this section, we prove upper and lower bounds for unending guarantees, which are probabilistic guarantees that hold for each step t, even when t is arbitrarily large. As a convention, we will use f j (t) to denote the fill of cup j after step t.
The main result of the section is a lower bound showing that no monotone stateless emptier can achieve an unending guarantee of o(log n) backlog.
Definition 6.1. An emptier is said to be stateless if the emptier's decision depends only on the state of the cups at each step. An emptier is said to be monotone if the following holds: given a state S of the cups in which the emptier selects some cup j to empty, if we define S′ to be S except that the amount of water in some cup i ≠ j has been reduced, then the emptier still selects cup j in state S′. A monotone stateless emptier is any emptier that is both monotone and stateless.
The monotonicity and stateless property dictate only how the emptier selects a cup j in each step. Once a cup j is selected the emptier is permitted to either (a) remove 1 full unit of water from that cup, or (b) skip their turn. This decision is allowed to be an arbitrary function of the state of the cups.
We begin in Section 6.1 by showing that all monotone stateless emptiers can be modeled as using a certain type of score function to make emptying decisions.
In Section 6.2, we give an oblivious filling strategy, called the fuzzing algorithm, that prevents monotone stateless emptiers from achieving unending probabilistic guarantees of o(log n) backlog (in fact, the filling strategy places an expected Θ(n^{2/3} log n) water into Θ(n^{2/3}) cups, meaning that bounds on tail size are also not viable, unless backlog is allowed to be polynomially large). The fuzzing algorithm is named after what is known as the fuzzing technique [39] for detecting security vulnerabilities in computer systems: by barraging the system with random noise, one accidentally discovers and exploits the structural holes of the system. In Section 6.3 we show that the fuzzing algorithm continues to prevent unending guarantees, even when the emptier is equipped with a global clock, allowing for the emptier to adapt to the number of steps that have occurred so far in the game.
Finally, in Sections 6.4 and 6.5, we determine the exact values of the resource-augmentation parameter ε for which the smoothed greedy and asymmetric smoothed greedy emptying algorithms achieve single-processor unending guarantees. In particular, we show that the minimum attainable value of ε is 2^{−polylog n}.
6.1. Score-Based Emptiers. In this section, we prove an equivalence between monotone stateless emptiers and what we call score-based emptiers. We then state several useful properties of score-based emptiers.
A score-based emptier has score functions σ 1 , σ 2 , . . . , σ n . When selecting which cup to empty from, the emptier selects the cup j whose fill f j maximizes σ j (f j ). The emptier can then select whether to either (a) remove 1 full unit of water from the cup, or (b) skip their turn; this decision is an arbitrary function of the state of the cups. The score functions are required to be monotonically increasing functions, meaning that σ i (a) < σ i (b) whenever a < b. Moreover, in order to break ties, all of the scores in the multiset {σ i (j/2) | i ∈ [n], j ∈ Z + } are required to be distinct. (We only consider fills of the form j/2 because in our lower bound constructions all fills will be multiples of 1/2.) It is easy to see that any score-based emptier is also a monotone stateless emptier. The following theorem establishes that the other direction is true as well: Theorem 6.2. Consider cup games in which the filler always places water into cups in multiples of 1/2. For these cup games, every monotone stateless emptying algorithm is equivalent to some score-based emptying algorithm.
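Operationally, a score-based emptier's selection rule is just an argmax over per-cup score functions; a minimal sketch (with fills and score_fns as hypothetical dictionary representations of the cup state and of σ_1, . . . , σ_n):

def score_based_selection(fills, score_fns):
    """Return the cup j maximizing sigma_j(f_j); the distinctness requirement
    on score values guarantees that this argmax is unique."""
    return max(fills, key=lambda j: score_fns[j](fills[j]))

# Example: two cups whose score functions weight their fills differently.
fills = {1: 1.5, 2: 1.0}
score_fns = {1: lambda f: f, 2: lambda f: 2 * f}
print(score_based_selection(fills, score_fns))  # prints 2, since 2 * 1.0 > 1.5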
For a set of cups 1, 2, . . . , k, a state of the cups is a tuple S = S(1), S(2), . . . , S(k) , where S(j) indicates the amount of water in cup j. Throughout this section we will restrict ourselves to states where S(j) is a non-negative integer multiple of 1/2.
In order to prove Theorem 6.2, we first derive several natural properties of monotone stateless emptiers. We say that the pair (j_1, r_1) dominates the pair (j_2, r_2) if either (a) j_1 = j_2 and r_1 > r_2, or (b) j_1 ≠ j_2 and, in the cup state where the only two non-empty cups are j_1 and j_2 with r_1 and r_2 water, respectively, the emptier selects cup j_1. We say that a cup j_1 dominates a cup j_2 in a state S if (j_1, r_1) dominates (j_2, r_2), where r_1 and r_2 are the amounts of water in cups j_1 and j_2, respectively, in state S.
The next lemma shows that the emptier's decision in each step is determined by which cup dominates the other cups.
Lemma 6.3. Let S be any state of the cups 1, 2, . . . , n, and suppose the emptier is following a monotone stateless algorithm. Then the cup j that the emptier selects from S is the unique cup j that dominates all other cups.
Proof. It suffices to show that cup j dominates all other cups, since only one cup can have this property. Consider a cup j ′ = j, and let r 1 and r 2 be the amounts of water in cups j and j ′ , respectively, in state S. Let S ′ be the state in which the only non-empty cups are j and j ′ with r 1 and r 2 units of water, respectively. By the monotonicity property of the emptier, it must be that the emptier selects cup j over cup j ′ in state S ′ . Thus cup j dominates cup j ′ in state S, as desired.
Lemma 6.4 (Transitivity of domination). Suppose that (j_1, r_1) dominates (j_2, r_2) and that (j_2, r_2) dominates (j_3, r_3). Then (j_1, r_1) dominates (j_3, r_3).
Proof. We begin by considering the case where j_1, j_2, j_3 are distinct. Consider the cup state S in which the only three cups that contain water are j_1, j_2, j_3, and they contain r_1, r_2, r_3 water, respectively. By Lemma 6.3, one of cups j_1, j_2, j_3 must dominate the others. Since j_2 is dominated by j_1 and j_3 is dominated by j_2, it must be that j_1 is the cup that dominates. Thus (j_1, r_1) dominates (j_3, r_3), as desired.
Next we consider the case where j_1 = j_2 and j_2 ≠ j_3. Suppose for contradiction that (j_1, r_1) does not dominate (j_3, r_3). Consider the cup state S in which j_1 and j_3 are the only cups containing water, and they contain r_1 and r_3 units of water, respectively. In state S, cup j_3 dominates cup j_1. By monotonicity, it follows that if we decrease the fill of j_2 = j_1 from r_1 to r_2, then cup j_3 must still dominate cup j_1 = j_2. But this means that (j_3, r_3) dominates (j_2, r_2), a contradiction.
Next we consider the case where j_1 ≠ j_2 and j_2 = j_3. Consider the cup state S in which j_1 and j_2 are the only cups containing water, and they contain r_1 and r_2 units of water, respectively. In state S, cup j_1 dominates cup j_2. By monotonicity, it follows that if we decrease the fill of j_2 from r_2 to r_3, then cup j_1 must still dominate cup j_2 = j_3. This means that (j_1, r_1) dominates (j_3, r_3), as desired.
Finally we consider the case where j 1 = j 2 = j 3 . In this case it must be that r 1 > r 2 and r 2 > r 3 . Thus r 1 > r 3 , meaning that (j 1 , r 1 ) dominates (j 3 , r 3 ), as desired.
By exploiting the transitivity of the domination property, we can now prove Theorem 6.2.
We conclude the section by observing a useful property of score-based emptiers, namely the existence of what we call equilibrium states.
We say that a state S on k cups is an equilibrium state if, for every pair of distinct cups i, j ∈ {1, 2, . . . , k}, σ_i(S(i) + 1/2) > σ_j(S(j)). That is, if 1/2 unit of water is added to any cup i, then that cup's score will exceed the scores of all other cups {1, 2, . . . , k} \ {i}.
Lemma 6.5. Consider cups 1, 2, . . . , k, and suppose their total fill m is a non-negative integer multiple of 1/2. For any set of score functions σ_1, . . . , σ_k, there is a unique equilibrium state for cups 1, 2, . . . , k in which the total amount of water in the cups is m.
Proof. Consider any state S = S(1), . . . , S(k) for cups 1, 2, . . . , k in which the total fill of the cups is m. Define the score severity of S to be max i∈[k] σ i (S(i)). If S is not an equilibrium state, then we can move 1/2 units of water from some cup i ∈ {1, 2, . . . , k} to some other cup j ∈ {1, 2, . . . , k} in a way that decreases the score severity of S.
Let S m be the set of states for cup 1, 2, . . . , k in which each cup contains a multiple of 1/2 units of water, and in which the total amount of water in cups is m. Since S m is finite, there must be a state S ∈ S m with minimum score severity. By the preceding paragraph, it follows that S is an equilibrium state.
Finally, we prove uniqueness. Suppose S, S ′ are distinct equilibrium states in S m . Then some cup i in S ′ must have greater fill than the same cup i in S. But by the equilibrium property, adding 1/2 units of water to cup i in S increases the score function of cup i to be larger than any other cup's score function in S. Thus S ′ must have a larger score severity than does S. Likewise, S must have a larger score severity than S ′ , a contradiction.
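The equilibrium condition is straightforward to check directly from the definition; a minimal sketch, assuming the state and the score functions are given as dictionaries and that all fills are multiples of 1/2:

def is_equilibrium(state, score_fns):
    """A state is an equilibrium state if, for every ordered pair of distinct
    cups (i, j), adding 1/2 unit of water to cup i would give it a strictly
    larger score than cup j currently has: sigma_i(S(i) + 1/2) > sigma_j(S(j))."""
    return all(score_fns[i](state[i] + 0.5) > score_fns[j](state[j])
               for i in state for j in state if i != j)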
6.2. The Oblivious Fuzzing Filling Algorithm. In this section, we describe a simple filling algorithm that, when pitted against a score-based emptier, achieves backlog Ω(log n) after n^{Θ(n log n)} steps with at least constant probability. Note that, throughout this section, we focus only on cup games that do not have resource augmentation.
The filling strategy, which we call the oblivious fuzzing algorithm, has a very simple structure. At the beginning of the algorithm, the filler randomly permutes the labels 1, 2, . . . , n of the cups. The filler then begins their strategy by spending a large number (i.e., n^{Θ(n log n)}) of steps randomly placing water into cups 1, 2, . . . , n. The filler then disregards cup n (note that cup n is a random cup due to the random-permutation step!), and spends a large number of steps randomly placing water into cups 1, 2, . . . , n − 1. The filler then disregards cup n − 1 and spends a large number of steps randomly placing water into cups 1, 2, . . . , n − 2, and so on.
Formally, the oblivious fuzzing algorithm works as follows. Let c ∈ N be a sufficiently large constant, and relabel the cups (from the filler's perspective) with a random permutation of 1, 2, . . . , n. The filling strategy consists of n phases of n^{cn log n} steps each. The i-th phase is called the (n − i + 1)-cup phase because it focuses on cups 1, 2, . . . , (n − i + 1). In each step of the i-th phase, the filler selects random values x_1, x_2 ∈ {1, 2, . . . , n − i + 1} uniformly and independently, and then places 1/2 unit of water into each of cups x_1, x_2. If x_1 = x_2, then the cup x_1 receives a full unit of water.
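For concreteness, a single step of the i-cup phase of the oblivious fuzzing filler can be sketched as follows (the returned dictionary is a hypothetical representation of the placement made during that step):

import random

def fuzzing_step(num_active_cups):
    """Pick x1, x2 uniformly and independently from the active cups
    1, ..., num_active_cups and place 1/2 unit of water into each;
    if x1 == x2, that single cup receives a full unit."""
    x1 = random.randint(1, num_active_cups)
    x2 = random.randint(1, num_active_cups)
    placement = {x1: 0.5}
    placement[x2] = placement.get(x2, 0.0) + 0.5
    return placement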
One interesting characteristic of the oblivious fuzzing algorithm is that it represents a natural workload in the scheduling problem that the cup game models. One can think of the cups as representing n tasks and water as representing work that needs to be scheduled. In this scheduling problem, the oblivious fuzzing filling algorithm simply assigns work to tasks at random, and selects one task every n^{cn log n} steps to stop receiving new work.
In this section, we prove the following theorem.
Theorem 6.6. Consider a cup game on n cups. Suppose that the emptier follows a score-based emptying algorithm, and that the filler follows the oblivious fuzzing filling algorithm. Then at the beginning of the n 2/3 -cup phase, the average fill of cups 1, 2, . . . , n 2/3 is Ω(log n), in expectation.
For each ℓ ∈ {1, 2, . . . , n − 1}, call a step t in the ℓ-cup phase emptier-wasted if the emptier fails to remove water from any of cups 1, 2, . . . , ℓ during step t (either because the emptier skips their turn, or because the emptier selects a cup j > ℓ). We show that for each ℓ ∈ {n^{2/3} + 1, n^{2/3} + 2, . . . , n − 1}, the ℓ-cup phase has at least Ω(1) emptier-wasted steps in expectation (or the average height of cups in that phase is already Ω(log n)). During an emptier-wasted step t, the total amount of water in cups 1, 2, . . . , ℓ increases by 1 (since the filler places water into the ℓ cups, and the emptier does not remove water from them). It follows that, during the ℓ-cup phase, the average amount of water in cups 1, 2, . . . , ℓ increases by Ω(1/ℓ) in expectation. Applying this logic to every phase gives Theorem 6.6. The key challenge is to show that, within the ℓ-cup phase, the expected number of emptier-wasted steps is Ω(1).
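The phase-by-phase accounting above is just a harmonic sum: each ℓ-cup phase contributes an expected Ω(1/ℓ) to the average fill, and summing from ℓ = n^{2/3} up to n − 1 gives Θ(log n). A quick numerical check of that sum (illustration only):

def harmonic_sum_over_phases(n):
    """Sum of 1/l for l from n**(2/3) (rounded) up to n - 1; this is about
    ln(n) - (2/3) * ln(n) = (1/3) * ln(n), i.e., Theta(log n)."""
    lo = int(round(n ** (2.0 / 3.0)))
    return sum(1.0 / l for l in range(lo, n))

print(harmonic_sum_over_phases(10 ** 6))  # roughly (1/3) * ln(10^6), about 4.6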
For each ℓ ∈ {1, 2, . . . , n − 1}, define the initial water level m ℓ of the ℓ-cup phase to be the total amount of water in cups 1, 2, . . . , ℓ + 1 at the beginning of the phase. Define the equilibrium state E ℓ = E ℓ (1), . . . , E ℓ (ℓ + 1) for the ℓ-cup phase to be the equilibrium state for cups 1, 2, . . . , ℓ + 1 in which the total amount of water is m ℓ + 1 (note that E ℓ exists and is unique by Lemma 6.5). One can think of m ℓ +1 as representing the total amount of water in cups 1, 2, . . . , ℓ+1 after the filler places 1 unit of water into the cups at the beginning of the first step in the ℓ-cup phase.
Define the bolus b ℓ of the ℓ-cup phase as follows. If r is the amount of water in cup ℓ + 1 at the beginning of the ℓ-cup phase, then b ℓ = max(0, r − E ℓ (ℓ + 1)). That is, b ℓ is the amount by which cup ℓ + 1 exceeds its equilibrium fill.
We begin by showing that, if m ℓ ≤ O(n log n), then the expected number of emptier-wasted steps in phase ℓ is at least E[b ℓ /2]. The basic idea is that, whenever fewer than b ℓ emptier-wasted steps have occurred, the filler has some small probability of reaching a state in which all of cups 1, 2, . . . , ℓ have fills no greater than E ℓ (1), E ℓ (2), . . . , E ℓ (ℓ), respectively. If this happens, then the score function of cup ℓ + 1 will exceed that of any of cups 1, 2, . . . , ℓ, and an emptier-wasted step occurs. Thus, whenever fewer than b ℓ emptier-wasted steps have occurred, the filler has a small probability of incurring an emptier-wasted step (within the next O(n log n) steps). Since the ℓ-cup phase is very long, the filler has many opportunities to induce an emptier-wasted step in this way. It follows that, with high probability, there will be at least b ℓ emptier-wasted steps in the ℓ-cup phase. Lemma 6.7 presents this argument in detail.
Lemma 6.7. Let ℓ ∈ [n − 1], condition on m_ℓ ≤ n log n, and condition on some value for b_ℓ. Under these conditions, the expected number of emptier-wasted steps in the ℓ-cup phase is at least b_ℓ/2.
Proof. Call a step in the ℓ-cup phase equilibrium-converging if for each cup j that the filler places water into, the fill x of cup j after the water is placed satisfies x ≤ E ℓ (j). One can think of an equilibrium-converging step as being a step in which the filler's behavior pushes each cup j towards its equilibrium state, without pushing any cups above their equilibrium state.
Call a step in the ℓ-cup phase convergence-enabling if the total amount of water in cups 1, 2, . . . , ℓ is less than Σ_{i=1}^{ℓ} E_ℓ(i) − 1 at the beginning of the step. Convergence-enabling steps have two important properties.
The Convergence Property: For any convergence-enabling step t, there is some pair of cups j, k (possibly j = k) that the filler can place water into so that the step is equilibrium converging. Thus, whenever a convergence-enabling step occurs, there is probability at least 1/ℓ^2 that the step is equilibrium converging.
The Bolus Property: At the beginning of any convergence-enabling step, the amount of water in cup ℓ + 1 must be greater than E_ℓ(ℓ + 1). This is a consequence of the fact that the total amount of water in cups 1, . . . , ℓ + 1 at the beginning of every step of the phase is at least m_ℓ = Σ_{i=1}^{ℓ+1} E_ℓ(i) − 1, while the water in cups 1, . . . , ℓ is less than Σ_{i=1}^{ℓ} E_ℓ(i) − 1.
Break the ℓ-cup phase into sequences of steps L_1, L_2, L_3, . . ., where each L_i is 2n log n steps. We begin by showing that, if L_i contains a convergence-enabling step and consists of only equilibrium-converging steps, then L_i must also contain at least one emptier-wasted step.
Claim 6.8. Suppose m ℓ ≤ n log n. Suppose that the first step of L i is convergence-enabling. If all of the steps in L i are equilibrium converging, then at least one of the steps must be emptier-wasted.
Proof. At the end of each step t, let f_j(t) denote the amount of water in each cup j, and define the potential function φ(t) in terms of how the fills f_1(t), . . . , f_ℓ(t) compare to the equilibrium fills E_ℓ(1), . . . , E_ℓ(ℓ).
Since the first step of L_i is convergence-enabling, the total amount of water in cups 1, 2, . . . , ℓ at the beginning of L_i is at most Σ_{i=1}^{ℓ} E_ℓ(i) − 1 ≤ m_ℓ ≤ n log n. It follows that, at the beginning of L_i, the potential function φ is at most n log n + n ≤ 2n log n − 1.
Whenever a step t is both equilibrium-converging and non-emptier-wasted, we have that either φ(t − 1) = 0 or φ(t) < φ(t − 1) − 1. Since φ is at most 2n log n − 1 at the beginning of L i , we cannot have φ(t) < φ(t − 1) − 1 for every step in L i . Thus, if every step in L i is equilibrium converging, then there must be at least one step that is either emptier-wasted or that satisfies φ(t − 1) = 0.
To complete the claim, we show that if there is at least one step t in L_i for which φ(t − 1) = 0 and step t is equilibrium-converging, then there must also be at least one emptier-wasted step. Suppose φ(t − 1) = 0, that step t is equilibrium-converging, and that no steps in L_i are emptier-wasted. Since there are no emptier-wasted steps in L_i, every step in L_i must be convergence-enabling, and thus cup ℓ + 1 contains more than E_ℓ(ℓ + 1) water at the beginning of step t (by the Bolus Property of convergence-enabling steps). Since φ(t − 1) = 0 and step t is equilibrium-converging, the cups 1, 2, . . . , ℓ contain fills at most E_ℓ(1), E_ℓ(2), . . . , E_ℓ(ℓ), respectively, after the filler places water in step t. It follows that, during step t, the emptier will choose cup ℓ + 1 over all of cups 1, 2, . . . , ℓ. Thus step t is an emptier-wasted step, a contradiction.
Next we use Claim 6.8 in order to show that, if m ℓ ≤ n log n and L i contains a convergenceenabling step, then L i has probability at least 1/n 4n log n of containing an emptier-wasted step. Claim 6.9. Condition on the fact that the first step of L i is convergence-enabling and that m ℓ ≤ n log n. Then L i contains an emptier-wasted step with probability at least 1/n 4n log n .
Proof. Since the first step of L_i is convergence-enabling, either every step of L_i is convergence-enabling or there is at least one emptier-wasted step. Recall by the Convergence Property that each convergence-enabling step has probability at least 1/n^2 of being equilibrium-converging. Thus there is probability at least 1/n^{4n log n} that every step of L_i (up until the first emptier-wasted step) is equilibrium-converging. By Claim 6.8, it follows that the probability of there being an emptier-wasted step is at least 1/n^{4n log n}.
We can now complete the proof of the lemma. For each L_i, if the number of emptier-wasted steps in L_1, . . . , L_{i−1} is less than ⌈b_ℓ⌉, then the first step of L_i is convergence-enabling. Since m_ℓ ≤ n log n, it follows by Claim 6.9 that L_i has probability at least 1/n^{4n log n} of containing an emptier-wasted step. Now collect the L_i's into collections of size n^{4n log n + 1}, so that the k-th collection is given by C_k = {L_{(k−1)·n^{4n log n + 1} + 1}, . . . , L_{k·n^{4n log n + 1}}}. Note that, as long as the constant c used to define the fuzzing algorithm is sufficiently large, the ℓ-cup phase is long enough so that it contains at least ⌈b_ℓ⌉ ≤ m_ℓ ≤ n log n collections C_1, C_2, . . . , C_{⌈b_ℓ⌉}. Say that a step collection C_i failed if, at the beginning of the step collection, the number of emptier-wasted steps that have occurred is less than b_ℓ, and C_i contains no emptier-wasted steps. The probability of a given C_i failing is at most
(1 − 1/n^{4n log n})^{n^{4n log n + 1}} ≤ 1/e^n.
It follows that the probability of any of C_1, . . . , C_{⌈b_ℓ⌉} failing is at most ⌈b_ℓ⌉/e^n ≤ (n log n)/e^n ≤ 1/2. If none of the collections C_1, . . . , C_{⌈b_ℓ⌉} fail, then there must be at least ⌈b_ℓ⌉ emptier-wasted steps. Thus the expected number of emptier-wasted steps that occur during the phase is at least b_ℓ/2.
In order to show that the expected number of emptier-wasted steps in phase ℓ is Ω(1) (at least, whenever m ℓ ≤ n log n), it suffices to show that expected bolus b ℓ is Ω(1) (conditioned on m ℓ ≤ n log n).
In order to prove a lower bound on the bolus, we examine a related quantity that we call the variation. If t + 1 is the first step of the ℓ-cup phase, then the variation v_ℓ of the ℓ-cup phase is defined to be
v_ℓ = Σ_{j=1}^{ℓ+1} |f_j(t) − E_ℓ(j)|.
The variation v_ℓ captures the degree to which the fills of cups 1, 2, . . . , ℓ + 1 differ from their equilibrium fills. The next lemma shows that, if the variation v_ℓ is large, then so will be the bolus b_ℓ in expectation.
Proof. Let t + 1 be the first step in the ℓ-cup phase. By the definition of E_ℓ, we have that Σ_{j=1}^{ℓ+1} E_ℓ(j) = m_ℓ + 1 = 1 + Σ_{j=1}^{ℓ+1} f_j(t). Thus Σ_{j=1}^{ℓ+1} max(0, f_j(t) − E_ℓ(j)) ≥ v_ℓ/2 − 1. Since the cups 1, 2, . . . , ℓ + 1 are randomly labeled, we have by symmetry that E[b_ℓ] ≥ Ω(v_ℓ/(ℓ + 1)), and the proof of the lemma is complete.
Proof. Let t + 1 be the first step of the ℓ-cup phase. Recall that v_ℓ = Σ_{j=1}^{ℓ+1} |f_j(t) − E_ℓ(j)|. Note that the equilibrium state E_ℓ depends on the amount of water m_ℓ in cups 1, 2, . . . , ℓ + 1 at the beginning of step t. Our goal is to show that E[v_ℓ] ≥ Ω(ℓ). To do this, we break the water placed by the filler into two parts: let a_j denote the amount of water placed into each cup j by the filler in the first n steps of the first phase, and let b_j denote the amount of water placed into each cup j by the filler in steps n + 1, n + 2, . . . , t − 1. Finally, let c_j denote the total amount of water removed from cup j by the emptier during steps 1, 2, . . . , t − 1.
The role of a j will be similar to that of the random offsets in the smoothed greedy emptying algorithm. Interestingly, these random offsets now work in the filler's favor, rather than the emptier's.
In order to prove that (7) holds with probability at least 1 − O(1/n^2), we show that the left side of (7) is tightly concentrated around its mean. If the X_j's were independent of one another, then we could achieve this with a Chernoff bound. Since the X_j's are dependent, we will instead use McDiarmid's inequality.
We can now complete the proof of Theorem 6.6.
6.3. Giving the Emptier a Time Stamp. In this section, we show that unending guarantees continue to be impossible, even if the score-based emptier is permitted to change their algorithm based on a global time stamp.
A dynamic score-based emptying algorithm A is dictated by a sequence X 1 , X 2 , X 3 , . . . , where each X i is a score-based emptying algorithm. On step t of the cup game, the algorithm A follows algorithm X t .
Define the extended oblivious fuzzing filling algorithm to be the oblivious fuzzing filling strategy, except that each phase's length is increased to consist of T(n) steps, where T is a sufficiently large function of n (that we will choose later).
Theorem 6.14. Consider a cup game on n cups. Suppose the emptier is a dynamic score-based emptier. Suppose the filler follows the extended oblivious fuzzing filling algorithm. Then at the beginning of the n^{2/3}-cup phase, the average fill of cups 1, 2, . . . , n^{2/3} is Ω(log n), in expectation.
Understanding when two score-based algorithms can be treated as "equivalent". We say that the cups 1, 2, . . . , n are in a legal state if each cup contains an integer multiple of 1/2 water, and the total water in the cups is at most n log n. By the assumption that the emptier is backlog-bounded, and that the filler follows the extended oblivious fuzzing filling algorithm, we know that the cup game considered in Theorem 6.14 will always be in a legal state.
Let L denote the set of legal states. For each score-based emptying algorithm X , we define the behavior vector B(X ) of X to be the set B(X ) = {(L, k) | X empties from cup k when the cups are in state L ∈ L}.
Note that, for some states L ∈ L, the emptier X may choose not to empty from any cup-in this case, L does not appear in any pair in B(X ).
The behavior vector B(X) captures X's behavior on all legal states. If B(X) = B(X′) for two score-based emptying algorithms X and X′, then we treat the two emptying algorithms as being the same (since their behavior is indistinguishable on the cup games that we are analyzing). This means that the number of distinct score-based emptying algorithms is finite, bounded by (n + 1)^{|L|}. We will use A to denote the set of distinct score-based emptying algorithms. Formally, each element of A is an equivalence class of algorithms, where each score-based algorithm is assigned to an equivalence class based on its behavior vector B(X).
Associating each phase with a score-based algorithm that it "focuses" on. In order to analyze the ℓ-cup phase of the extended oblivious fuzzing filling algorithm, we break the phase into segments, where each segment consists of 2n log n · |A| steps. For each segment, there must be an algorithm A ∈ A that the emptier uses at least 2n log n times within the segment. We say that the segment focuses on emptying algorithm A.
Let K denote the number of segments in the ℓ-cup phase. By the pigeon-hole principle, there must be some algorithm A ∈ A such that at least K/|A| of the segments in the ℓ-cup phase focus on A. We say that the ℓ-cup phase focuses on algorithm A.
For each ℓ ∈ {1, . . . , n}, let A ℓ denote the score-based emptying algorithm that the ℓ-cup phase focuses on (if there are multiple such algorithms A ℓ , select one arbitrarily).
Our proof of Theorem 6.14 will analyze the ℓ-cup phase by focusing on how the phase interacts with algorithm A ℓ . This is made difficult by the fact that, between every two steps in which the emptier uses algorithm A ℓ , there may be many steps in which the emptier uses other score-based emptying algorithms.
Defining the equilibrium state and bolus of each phase. We define the equilibrium state E ℓ and the bolus b ℓ of the ℓ-cup phase to each be with respect to the score-based emptying algorithm A ℓ . That is, E ℓ is the equilibrium state for algorithm A ℓ for the cups 1, 2, . . . , ℓ + 1 in which the total amount of water (in those cups) is m ℓ + 1 (recall that m ℓ is the amount of water in cups 1, 2, . . . , ℓ + 1 at the beginning of the ℓ-cup phase). Using this definition of E ℓ , the bolus is b ℓ = max(0, r − E ℓ (ℓ + 1)), where r is the amount of water in cup ℓ + 1 at the beginning of the ℓ-cup phase.
The key to proving Theorem 6.14 is to show that, if m ℓ ≤ n log n, then the expected number of emptier-wasted steps in the ℓ-cup phase is at least Ω(b ℓ ). That is, we wish to prove a result analogous to Lemma 6.7 from Section 6.2. Lemma 6.15. Let ℓ ∈ [n − 1], condition on m ℓ ≤ n log n, and condition on some value for b ℓ . Under these conditions, the expected number of emptier-wasted steps in the ℓ-cup phase is at least b ℓ /2.
Proof. Call a step t in the ℓ-cup phase equilibrium-converging if either: • The emptier uses algorithm A ℓ during step t, and for each cup j that the filler places water into, the fill x of cup j after the water is placed satisfies x ≤ E ℓ (j). • The emptier uses an algorithm A ≠ A ℓ during step t, and the filler places all of their water (i.e., a full unit) into the cup j whose score (as assigned by the score-based algorithm A) is largest at the beginning of step t.
The first case in the definition of equilibrium-converging steps is similar to that in the proof of Lemma 6.7. The second case, where the emptier uses an algorithm A ≠ A ℓ , is different; in this case, the definition guarantees that the step is either emptier-wasted or is a no-op (meaning that the water removed by the emptier during the step is exactly the same as the water placed by the filler). Call a step in the ℓ-cup phase convergence-enabling if the total amount of water in cups 1, 2, . . . , ℓ is less than Σ_{i=1}^{ℓ} E ℓ (i) − 1 at the beginning of the step. Just as in the proof of Lemma 6.7, convergence-enabling steps have two important properties:
The Convergence Property: For any convergence-enabling step t, there is some pair of cups j, k (possibly j = k) that the filler can place water into in order so that the step is equilibrium converging. Thus, whenever a convergence-enabling step occurs, there is probability at least 1/ℓ 2 that the step is equilibrium converging.
The Bolus Property: At the beginning of any convergence-enabling step, the amount of water in cup ℓ + 1 must be greater than E ℓ (ℓ + 1). This is a consequence of the fact that the total amount of water in cups 1, . . . , ℓ + 1 is at least Σ_{i=1}^{ℓ+1} E ℓ (i) − 1.
We now prove a claim analogous to Claim 6.8.
Claim 6.16. Suppose m ℓ ≤ n log n, and consider a segment S in the ℓ-cup phase that focuses on A ℓ . If S begins with a convergence-enabling step, and every step in S is equilibrium converging, then S must contain an emptier-wasted step.
Proof. There are two types of steps in S: (1) equilibrium converging steps where the emptier uses algorithm A ℓ , and (2) equilibrium converging steps where the emptier does not use A ℓ . All type (2) steps are either emptier-wasted or are no-ops (meaning that they do not change the state of the cup game). On the other hand, because segment S focuses on A ℓ , there must be at least 2n log n type (1) steps. If no type (2) steps are emptier-wasted, then the type (1) steps meet the conditions for Claim 6.8 (i.e., the type (1) steps meet the conditions that are placed on L i in the claim). Thus, by Claim 6.8, at least one of the steps in S is emptier-wasted, as desired.
We can now complete the proof of the lemma. For each segment S that focuses on A ℓ , if the number of emptier-wasted steps prior to S is less than ⌈b ℓ ⌉, then the first step of S is convergence-enabling (and any steps in S up until the first emptier-wasted step in S are also convergence-enabling). By the Convergence Property and Claim 6.16, it follows that S has probability at least p := 1/ℓ 2|S| = 1/ℓ 2n log n|A| of containing an emptier-wasted step.
If K is the number of segments in a phase, then at least K ′ = K/|A| of the segments in phase ℓ must focus on algorithm A ℓ . Denote these segments by S 1 , . . . , S K ′ . Break the ℓ-cup phase into collections C 1 , C 2 , . . . , C 2n log n of time segments, where each C i contains K ′ /(2n log n) of the S i 's. Say that a collection C i fails if fewer than ⌈b ℓ ⌉ emptier-wasted steps occur prior to C i , and no emptier-wasted step occurs during C i . Since each C i contains at least K ′ /(2n log n) segments that focus on A ℓ , the probability of C i failing is at most
(8)    (1 − p) K ′ /(2n log n) .
Assuming that K is sufficiently large as a function of n, the exponent in (8) is also sufficiently large as a function of n, and thus (8) is at most 1/(4n log n). By a union bound over the C i 's, it follows that the probability of any C i failing is at most 1/2. On the other hand, if none of the C i 's fail, then at least ⌈b ℓ ⌉ steps must be emptier-wasted (here, we are using the fact that the number of collections C i is 2n log n ≥ m ℓ ≥ ⌈b ℓ ⌉). Thus the expected number of emptier-wasted steps is at least ⌈b ℓ ⌉/2.
We can now prove Theorem 6.14.
Proof of Theorem 6.14. The proof follows exactly as for Theorem 6.6, except that Lemma 6.7 is replaced with Lemma 6.15.
6.4. Unending guarantees with small resource augmentation. In this section we show that, even though resource augmentation ε > 0 is needed to achieve unending guarantees for the smoothed greedy (and asymmetric smoothed greedy) emptying algorithms, the amount of resource augmentation that is necessary is substantially smaller than was previously known. In particular, we prove unending guarantees when ε = 2 − polylog n . Theorem 6.17 states an unending guarantee for the smoothed greedy emptying algorithm, using ε = 2 − polylog n .
Theorem 6.17. Consider a single-processor cup game in which the emptier follows the smoothed greedy emptying algorithm, and the filler is an oblivious filler. If the game has resource augmentation parameter ε ≥ 2 − polylog n , then each step t achieves backlog O(log log n) with probability 1 − 2 − polylog n (where the exponent in the polylog is a constant of our choice).
Theorem 6.18 states an unending guarantee for the asymmetric smoothed greedy emptying algorithm, using ε = 2 − polylog n .
Theorem 6.18. Consider a single-processor cup game in which the emptier follows the asymmetric smoothed greedy emptying algorithm, and the filler is an oblivious filler. If the game has resource augmentation parameter ε ≥ 2 − polylog n , then each step t achieves tail size O(polylog n) and backlog O(log log n) with probability 1 − 2 − polylog n (where the exponent in the polylog in the probability is a constant of our choice).
We begin by proving Theorem 6.17. Call a step t a rest step if the emptier removes less than 1 unit of water during that step. The next lemma shows that rest steps are relatively common. Lemma 6.19. Consider a single-processor cup game in which the emptier follows either the smoothed greedy or the asymmetric smoothed greedy emptying algorithm. Any sequence of n/ε + 1 steps must contain a rest step.
Proof. Whenever a step is a rest step, it must be that every cup contains less than 1 unit of water, meaning that the total amount of water in the system is at most n. On the other hand, during each non-rest step, the amount of water in the system decreases by at least ε. It follows that, if there are k non-rest steps in a row, then the total amount of water in the system after those steps is at most n − kε. Thus the number of non-rest steps that can occur in a row is never more than n/ε, as desired.
In order to prove Theorem 6.17, we will exploit the following result of Bender et al. [11] which analyzes the smoothed greedy algorithm for ε = 0: Theorem 6.20 (Bender et al. [11]). Consider a single-processor cup game in which the emptier follows the smoothed greedy emptying algorithm, and the filler is an oblivious filler. Moreover, suppose ε = 0. For any positive constants c and d, and any t ≤ 2 log c n , step t has backlog O(log log n) with probability at least 1 − 2 − log d n .
Although Theorem 6.20 only applies to the first 2 polylog n steps of a game, we can use it to prove the following lemma. Define a fractional reset to be what happens if one reduces the fill f j of each cup j to f j − ⌊f j ⌋. That is, the fills of the cups are decreased by integer amounts to be in [0, 1). The next lemma shows that, if a cup game is fractionally reset after a given step j, then the following steps j + 1, j + 2, . . . , j + 2 polylog n are guaranteed to have small backlog. Lemma 6.21. Consider a single-processor cup game in which the emptier follows the smoothed greedy emptying algorithm, and the filler is an oblivious filler. Consider a step t 0 , and suppose that, after step t 0 , the cup system is fractionally reset. Then for any positive constants c and d, and any t ≤ t 0 + 2 log c n , step t has backlog O(log log n) with probability at least 1 − 2 −2 log d n .
Proof. For each cup j, let r j be the random initial offset placed into cup j by the smoothed greedy emptying algorithm, and let c j be the total amount of water placed into cup j by the filler during steps 1, 2, . . . , t 0 . Because the emptier always removes water in multiples of 1, they never change the fractional amount of water in any cup (i.e., the amount of water modulo 1). It follows that the fractional amount of water in each cup j is given by q j := c j + r j (mod 1).
Since r j is uniformly random in [0, 1), the value q j is also uniformly random in [0, 1). Moreover, because the initial offsets r j are independent of one another, so are the values q j .
Because the values q j are independent and uniformly random in [0, 1), they can be thought of as initial random offsets for the smoothed greedy emptying algorithm. Thus, if each cup j is reset to have fill q j after step t 0 , then the following steps can be analyzed as the first steps of a cup game in which the emptier follows the smoothed greedy emptying algorithm. The claimed result therefore follows from Theorem 6.20.
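The observation that the post-reset fills q j = c j + r j (mod 1) are independent and uniform on [0, 1) can also be checked numerically. The Monte Carlo sketch below is only an illustration of that observation, not part of the proof; the filler amounts used are arbitrary.

```python
import random

def fractional_fill_samples(filler_amounts, trials=100_000):
    """For each cup j, sample q_j = (c_j + r_j) mod 1 with r_j ~ Uniform[0, 1).
    Regardless of the deterministic amounts c_j placed by an oblivious filler,
    each q_j should itself be Uniform[0, 1)."""
    samples = [[] for _ in filler_amounts]
    for _ in range(trials):
        for j, c_j in enumerate(filler_amounts):
            r_j = random.random()            # emptier's random initial offset
            samples[j].append((c_j + r_j) % 1.0)
    return samples

# The sample mean of a Uniform[0, 1) variable is close to 0.5 for every cup.
samples = fractional_fill_samples([2.75, 0.3, 5.0])
print([round(sum(s) / len(s), 3) for s in samples])
```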
We now prove Theorem 6.17.
Proof of Theorem 6.17. Consider a step t, and let d be a large positive constant. For each step t 0 ∈ {t − n/ε, . . . , t}, Lemma 6.21 tells us that if a fractional reset were to happen after step t 0 , then step t would have probability at least 1 − 2 − log d n of having backlog O(log log n). By a union bound, it follows that if a fractional reset were to happen after any of steps t − n/ε, . . . , t, then step t would have probability at least 1 − (n/ε)2 − log d n of having backlog O(log log n). Supposing that d is a sufficiently large constant, this probability is at least 1 − 2 − log d/2 n . By Lemma 6.19, at least one step t 0 ∈ {t − n/ε, . . . , t} is a rest step. This means that, at the end of step t 0 , every cup contains less than 1 unit of water. In other words, the state of the system after step t 0 is the same as if the system were to be fractionally reset. It follows that, for any constant d, the backlog after step t is O(log log n) with probability at least 1 − 2 − log d/2 n . This completes the proof.
The proof of Theorem 6.18 follows similarly to the proof of Theorem 6.17. Rather than using Theorem 6.20, we instead analyze the case of ε = 0 using the following version of Theorem 4.10: Theorem 6.22. Consider a single-processor cup game that lasts for 2 polylog n steps and in which the emptier follows the asymmetric smoothed greedy algorithm. Then with high probability in n, the number of cups containing 2 or more units of water never exceeds O(polylog n) and the backlog never exceeds O(log log n) during the game.
We now prove Theorem 6.18.
Proof of Theorem 6.18. The proof follows in exactly the same way as for Theorem 6.17, except that Theorem 6.22 is used in place of Theorem 6.20.
6.5. Tight lower bounds on resource augmentation for smoothed greedy. Theorems 6.17 and 6.18 give unending guarantees for the smoothed greedy (and asymmetric smoothed greedy) emptying algorithms using resource augmentation ε = 1/2 polylog n . Theorem 6.23 shows that such guarantees cannot be achieved with smaller resource augmentation.
Theorem 6.23. Consider a single-processor cup game on n cups. Suppose ε = 1/2 log ω(1) n , and suppose the emptier follows either the smoothed greedy emptying algorithm or the asymmetric smoothed greedy emptying algorithm. Then there is an oblivious filling strategy that causes there to be a step t at which the expected backlog is ω(log log n).
To prove Theorem 6.23, we will have the filler follow the oblivious fuzzing filling algorithm on min(1/ log ε −1 , n) cups. Rather than placing water in multiples of 1/2, however, the filler now places water in multiples of 1/2 − ε/2 (in order so that the total water placed in each step is 1 − ε).
The fact that ε > 0, however, makes it so that we cannot directly apply Theorem 6.6. Thus, in order to prove Theorem 6.23, we must first prove that the resource augmentation ε = 1/2 log ω(1) n is so small that, with high probability, it does not have a significant effect on the game by step t * .
In order to bound the impact of resource augmentation on the emptier, we exploit the random structure of the emptier's algorithm, and use that random structure to show that the emptier's behavior is robust to small "perturbations" due to resource augmentation. Lemma 6.24. Consider resource augmentation ε = 1/2 log ω(1) n and consider a step t * ≤ ε −1/10 . With probability at least 1 − O( √ ε), the resource augmentation does not affect the emptier's behavior during the first t * steps.
Proof. We begin with a simple observation: the total amount of resource augmentation during the first t * steps is εt * ≤ ε 9/10 . Call this the Net-Augmentation Observation. For each step t, define S t to be the state of the cup game after the t-th step without resource augmentation, and define S ′ t to be the state of the cup game with resource augmentation ε = 1/2 log ω(1) n (note that, in both cases, the emptier follows the same variant of the smoothed greedy algorithm using the same random initial offsets). Let S t (j) (resp. S ′ t (j)) denote the fill of cup j, after the filler has placed water in the t-th step, but before the emptier has removed water (note that, when discussing fill, we include the random initial offset placed by the emptier in each cup).
Suppose that, during steps 1, 2, . . . , t − 1, the emptier's behavior is unaffected by the resource augmentation. The only way that the emptier's behavior in step t can be affected by the resource augmentation is if either: • Case 1: ⌊S t (j)⌋ ≠ ⌊S ′ t (j)⌋ for some cup j. By the Net-Augmentation Observation, it follows that (S t (j) mod 1) ∈ [−ε 9/10 , ε 9/10 ]. • Case 2: S t (j) < S t (k) but S ′ t (k) < S ′ t (j) for some cups j and k. By the Net-Augmentation Observation, it follows that S t (k) − S t (j) < ε 9/10 . We will show that the probability of either Case 1 or Case 2 happening is at most O(ε 9/10 n 2 ). It follows that the probability of resource augmentation affecting the emptier's behavior during any of the first t * steps is at most O(ε 9/10 n 2 t * ) ≤ O( √ ε).
Rather than directly bounding probability of either Case 1 or Case 2 occurring on step t, we can instead bound the probability that either, (10) (S t (j) mod 1) ∈ [−ε 9/10 , ε 9/10 ] for some cup j, or that (11) S t (k) − S t (j) < ε 9/10 for some cups j, k. Recall that the values S t (1), S t (2), . . . , S t (n) modulo 1 are uniformly and independently random between 0 and 1; this is because the emptier initially places random offsets r j ∈ [0, 1) into each cup j, which permanently randomizes the fractional amount of water in that cup. Thus the probability that (S t (j) mod 1) ∈ [−ε 9/10 , ε 9/10 ] for a given cup j is O(ε 9/10 ), and the probability that S t (k) − S t (j) < ε 9/10 for a given pair of cups j, k is also O(ε 9/10 ). By union-bounding over all cups j (for (10)) and over all pairs of cups j, k (for (11)), we get that the probability of either (10) or (11) occurring is O(ε 9/10 n 2 ), as desired.
We can now complete the proof of Theorem 6.23.
Proof of Theorem 6.23. For each step t, define S t and S ′ t as in Lemma 6.24. By Theorem 6.6 and (9), there exists t * ≤ 2Õ (1/ √ log ε −1 ) ≤ ε −1/10 for which the expected backlog in S t * is ω(log log n). By the Net-Augmentation Observation (from Lemma 6.24), it follows that the expected backlog in S ′ t * is at least ω(log log n−ε 9/10 ) ≥ ω(log log n), which completes the proof. | 2021-04-13T01:15:49.085Z | 2021-04-12T00:00:00.000 | {
"year": 2021,
"sha1": "26ccb2de6aa2f38ec54e8963c4d1984279e1e996",
"oa_license": null,
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3406325.3451033",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "39ba20fb5e8b193392e74efe1b8ef4676a61e2ef",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Computer Science"
]
} |
118672239 | pes2o/s2orc | v3-fos-license | Photometric study of IC 2156
An optical UBVRI photometric analysis has been carried out using the Sloan Digital Sky Survey (SDSS) database in order to estimate the astrophysical parameters of the poorly studied open star cluster IC 2156. The results of the present study are compared with a previous one of ours, which relied on 2MASS JHK infrared photometry. The stellar density distribution and color-magnitude diagrams of the cluster are used to determine its geometrical structure: the limiting radius, core and tidal radii, and the distances from the Sun, from the Galactic plane and from the Galactic center. In addition, the main photometric parameters (age, distance modulus, color excesses, membership, total mass, luminosity and mass functions, and relaxation time) have been estimated.
INTRODUCTION
Open star clusters are important celestial bodies for understanding star formation and stellar evolution theories. Color-magnitude diagram (CMD) analysis through isochrone fitting gives good estimates of the astrophysical parameters of clusters, e.g. age, reddening and distance. In the last decades, many studies have been performed using different techniques, starting from photographic photometry, moving to charge-coupled device (CCD) photometry, and finally employing various isochrone models. The large number of results produced in the literature is gathered in catalogs and databases, e.g. Webda 1 and Dias 2 . In this context, we have presented several contributions (Tadross 2011; Tadross 2009a; Tadross 2009b; Tadross 2008a; Tadross 2008b).
The present study depends mainly on the latest version of the SDSS database (SDSS DR12) 3 , which provides homogeneous ugriz photometry for stars in the northern sky. The most important reason for using the SDSS database is that its ugriz point-spread-function (PSF) photometry can be used to set the zero points of UBVRI (Chonis & Gaskell 2008). The cluster IC 2156 was studied in the 2MASS JHK system by Tadross (2009b) among 11 previously unstudied open star clusters. Fig. 1 displays an image of this target. This paper is organized as follows. Data extraction is presented in Section 2, the data analysis and parameter estimations are described in Section 3, and the results and conclusion of our study are summarized in Section 4.
DATA EXTRACTION
The open star cluster IC 2156 is located at the J2000.0 coordinates α = 06 h 04 m 51 s , δ = +24° 09′ 30″, ℓ = 186.291°, b = 1.297°. We extracted the ugriz PSF magnitudes of all stars within a radius of 10 arcmin around the center of the cluster from the SDSS data release 12 (DR12) of Alam et al. (2015). The SDSS survey is conducted with the CCD camera of the 2.5 m telescope at Apache Point Observatory (New Mexico, USA). We converted the ugriz magnitudes into the UBVRI (Johnson-Cousins) photometric system using Chonis & Gaskell (2008). The standard errors of the transformation equations for U, B, V, R and I are 0.007, 0.007, 0.005, 0.005 and 0.009 mag, respectively. Fig. 2 shows the magnitude errors in each ugriz filter.
Although the apparent diameter of the cluster is less than 5 arcmin, the downloaded data extend beyond that diameter, to about 10 arcmin, so that the background field stars are reached. To obtain a clean data sample for the investigated cluster, the photometric completeness limit has been applied to the SDSS pass-band data to avoid over-sampling of the lower parts of the cluster's CMDs (cf. Bonatto et al. 2004). Stars with observational uncertainties ≥ 0.20 mag have been removed. In addition, stellar photometric membership criteria are adopted based on the location of the stars within ± 0.1 mag around the zero-age main sequence (ZAMS) curves in the CMDs (Clariá & Lapasset 1986).
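The completeness and membership cuts described above amount to two simple filters. The sketch below is one way such a catalogue could be cleaned; the column names and the interpretation of the ±0.1 mag ZAMS window (applied in V at fixed B−V) are assumptions made for illustration, not the authors' actual pipeline.

```python
import numpy as np

def clean_catalogue(df, zams_bv, zams_v, err_limit=0.20, zams_window=0.1):
    """Apply completeness and photometric-membership cuts to a star catalogue.

    df: pandas.DataFrame with assumed columns "B", "V", "B_err", "V_err".
    zams_bv, zams_v: arrays tracing the ZAMS in the (B-V, V) plane,
        with zams_bv sorted in increasing order for interpolation.
    """
    # 1) Completeness: drop stars with observational uncertainties >= 0.20 mag.
    good = df[(df["V_err"] < err_limit) & (df["B_err"] < err_limit)].copy()

    # 2) Membership: keep stars within +/- 0.1 mag of the ZAMS in the CMD.
    expected_v = np.interp(good["B"] - good["V"], zams_bv, zams_v)
    return good[np.abs(good["V"] - expected_v) <= zams_window]
```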
Cluster's Radial Density Profile
To establish the radial density profile (RDP) of IC 2156, the area is divided into concentric circular zones with bin sizes R i ≤ 1 arcmin from the cluster center. The number density in the i th zone is calculated using the formula R i = N i /A i , where N i is the number of stars and A i is the area of the i th shell. The star counts of each shell are obtained by subtracting the counts of the inner shells, so that we obtain only the number of stars within the relevant shell's area, not a cumulative count. The density uncertainty in each shell was calculated using the relative Poisson error. We applied the empirical King (1966) model, parameterizing the density function ρ(r) in terms of f bg , f 0 and r c , the background density, the central star density and the core radius of the cluster, respectively. The cluster's limiting radius is defined as the radius that covers the entire cluster area, where the profile reaches stability at the background field density. Because of strong field-star contamination, it is not possible to completely separate all field stars from cluster members. The limiting radius of the cluster can therefore be described as an observational border, which depends on the spatial distribution of stars in the cluster, the density of members and the degree of field-star contamination. Fig. 3 shows the RDP of IC 2156; the limiting radius, core radius and background field density are indicated in the figure. Finally, knowing the cluster's total mass (Sec. 3.4), the tidal radius can be calculated by applying the equation of Jeffries et al. (2001), which relates the tidal radius R t to the total mass M c of the cluster.
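A least-squares fit of the observed RDP and the subsequent tidal-radius estimate can be sketched as follows. Since the two equations themselves are not reproduced in the text above, the specific King-profile form and the numerical tidal-radius scaling written here are assumptions based on forms commonly used in this literature, not quantities quoted by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def king_profile(r, f_bg, f_0, r_c):
    """Assumed empirical King (1966) surface-density profile,
    rho(r) = f_bg + f_0 / (1 + (r/r_c)^2)."""
    return f_bg + f_0 / (1.0 + (r / r_c) ** 2)

def fit_rdp(radii, densities, density_errors):
    """Fit the radial density profile and return (f_bg, f_0, r_c)."""
    popt, _ = curve_fit(king_profile, radii, densities,
                        sigma=density_errors,
                        p0=[min(densities), max(densities), 1.0])
    return popt

def tidal_radius_pc(cluster_mass_msun):
    """Tidal radius in pc from the total cluster mass (solar masses), using the
    frequently quoted scaling R_t ~ 1.46 * M_c**(1/3) as a stand-in for the
    Jeffries et al. (2001) relation referenced above."""
    return 1.46 * cluster_mass_msun ** (1.0 / 3.0)
```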
The main photometry
Here, we determine the main astrophysical parameters of IC 2156, i.e. the color excesses, age and distance modulus. First, we used reliable data of stars with high membership probability (i.e. stars with good photometric precision located very close to the cluster's center) in order to derive the reddening from the color-color, (U-B)-(B-V), diagram. The color excesses are found to be E(B-V) = 0.55 mag and E(U-B) = 0.39 mag, as shown in the upper panel of Fig. 4. Second, we determined the age and distance modulus of the cluster by fitting isochrones to its color-magnitude diagrams (CMDs). Several fits to the cluster CMDs have been made using the Padova isochrones based on the stellar evolution models of Girardi et al. (2010), as shown in the lower panel of Fig. 4.
It is worth mentioning that the assumption of solar metallicity is quite adequate for young and intermediate-age open clusters close to the Galactic disk. For the adopted metallicity, the isochrone fits converge to the same distance modulus (12.30 mag) and the same age (250 Myr). Under the assumption of R gc⊙ = 8.34 ± 0.16 kpc of Reid et al. (2014), which is based on high-precision measurements of the Milky Way, the distance of IC 2156 from the Galactic center R gc is estimated to be 11.20 kpc. Also, the projected distances on the Galactic plane from the Sun (X ⊙ & Y ⊙ ) and the distance from the Galactic plane (Z ⊙ ) are determined to be 2865, -315, and 65 pc respectively; see Table 1. For more details about the geometry of the Galactic distance calculations, see Tadross (2011).
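The projected distances quoted above follow from simple spherical geometry. The sketch below reproduces that geometry under one common sign convention (which may differ from the convention of Tadross 2011); feeding in a heliocentric distance of roughly 2.9 kpc, consistent with the 12.30 mag distance modulus, returns values close in magnitude to those quoted.

```python
import numpy as np

def galactic_distances(distance_pc, l_deg, b_deg, r_gc_sun_pc=8340.0):
    """Projected distances of a cluster from the Sun and the Galactic centre,
    given its heliocentric distance and Galactic coordinates (l, b).
    Here X grows toward the Galactic centre, Y along Galactic rotation,
    and Z toward the north Galactic pole; sign conventions vary by author."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    d_plane = distance_pc * np.cos(b)      # projection onto the Galactic plane
    x = d_plane * np.cos(l)
    y = d_plane * np.sin(l)
    z = distance_pc * np.sin(b)
    # Law of cosines in the Galactic plane for the galactocentric distance.
    r_gc = np.sqrt(r_gc_sun_pc**2 + d_plane**2
                   - 2.0 * r_gc_sun_pc * d_plane * np.cos(l))
    return x, y, z, r_gc

# For IC 2156 (assumed d ~ 2884 pc): |X| ~ 2.87 kpc, Y ~ -0.32 kpc,
# Z ~ 65 pc, R_gc ~ 11.2 kpc.
print(galactic_distances(2884.0, 186.291, 1.297))
```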
Luminosity functions
It is difficult to determine the membership of a cluster using only the stellar RDP. It can be argued that most of the stars in the inner concentric rings are quite likely members, whereas the external rings are more heavily contaminated by field stars. Therefore, the stars that are close to the cluster's center and near the main sequence (MS) in the CMDs are taken to be the cluster members. These MS stars are very important in determining the luminosity function, mass function and total mass of the investigated cluster. For this purpose, we obtained the luminosity function (LF) of the cluster by summing up the V-band luminosities of all stars within the determined limiting area of the cluster. Before building the LF, we converted the apparent V-band magnitudes of the cluster members into absolute magnitudes using the distance modulus of the cluster. We constructed the LF histogram to include a reasonable number of stars in each absolute V magnitude bin for the best counting statistics; see
Mass function and total mass
The mass function (MF) of the cluster is built using the theoretical evolutionary tracks and their isochrones at the specific age of the cluster. The masses of the cluster members can be derived from the polynomial expression developed by Girardi et al. (2010) for solar metallicity.
The LF and MF are correlated with each other through the well-known mass-luminosity relation. The accurate determination of both (LF and MF) suffers from field-star contamination, membership uncertainty, and mass segregation, which may affect even poorly populated, relatively young clusters (Scalo 1998). On the other hand, the properties and evolution of a star are closely related to its mass, so the determination of the initial mass function (IMF) is needed. The IMF is an empirical relation that describes the mass distribution of a population of stars in terms of their theoretical initial masses. The IMF is defined as a power law, dN/dM ∝ M^−α, where dN is the number of stars in the mass interval (M, M + dM) and α is a dimensionless exponent. The IMF for massive stars (> 1 M ⊙ ) has been studied and well established by Salpeter (1955), who found α = 2.35. This Salpeter form shows that the number of stars in each mass range decreases rapidly with increasing mass. The MF slope of IC 2156 is found to be -2.7, which is reasonably close to the Salpeter value, as shown in Fig. 6.
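Given stellar masses assigned from the isochrones, the MF slope can be estimated with a straightforward log-log fit, as in the hedged sketch below; the binning choices are arbitrary and the sign convention follows dN/dM ∝ M^−α.

```python
import numpy as np

def mass_function_slope(masses, bins=8):
    """Estimate alpha by fitting a power law dN/dM ~ M**(-alpha) to binned
    stellar masses (in solar masses); Salpeter (1955) gives alpha ~ 2.35."""
    counts, edges = np.histogram(masses, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    widths = np.diff(edges)
    keep = counts > 0                      # avoid taking log of empty bins
    log_m = np.log10(centers[keep])
    log_dndm = np.log10(counts[keep] / widths[keep])
    slope, _intercept = np.polyfit(log_m, log_dndm, 1)
    return -slope                          # alpha is minus the fitted slope
```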
To estimate the total mass of the cluster, the mass of each star has been estimated from a polynomial equation derived from the solar-metallicity isochrone data (absolute magnitudes versus actual masses) at the age of the cluster. Summing, over all bins, the number of stars in each bin multiplied by the mean mass of that bin yields the total mass of the cluster, which is found to be 310 M ⊙ .
Dynamical state and relaxation time
The time the cluster needs from its formation to reach a stable state against contraction and destruction forces is known as the relaxation time of the cluster (T relax ). This time depends mainly on the number of members and the cluster diameter. To describe the dynamical state of the cluster, the relaxation time can be calculated as T relax = [N / (8 ln N)] T cross , where T cross = D/σ V denotes the crossing time, N is the total number of stars in the investigated region of diameter D, and σ V is the velocity dispersion (Binney & Tremaine 1998), with a typical value of 3 km s −1 (Binney & Merrifield 1987). Using this formula, we estimated the dynamical relaxation time of IC 2156 to be 5.5 Myr. Since this is much shorter than the estimated cluster age, IC 2156 is indeed dynamically relaxed.
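The relaxation-time estimate is a direct evaluation of the formula above; a small helper is sketched below. The values of N and D for IC 2156 are not quoted in this excerpt, so the function only encodes the stated relation and the unit conversions.

```python
import numpy as np

KM_PER_PC = 3.0857e13      # kilometres in one parsec
SEC_PER_MYR = 3.1557e13    # seconds in one megayear

def relaxation_time_myr(n_stars, diameter_pc, sigma_v_kms=3.0):
    """T_relax = N / (8 ln N) * T_cross, with the crossing time
    T_cross = D / sigma_V (Binney & Tremaine 1998) and a typical
    velocity dispersion of 3 km/s (Binney & Merrifield 1987)."""
    t_cross_sec = (diameter_pc * KM_PER_PC) / sigma_v_kms
    t_cross_myr = t_cross_sec / SEC_PER_MYR
    return n_stars / (8.0 * np.log(n_stars)) * t_cross_myr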
CONCLUSION
The open star cluster IC 2156 has been analysed using ugriz data extracted from the SDSS survey and converted to UBVRI using the transformation equations of Chonis & Gaskell (2008). This open cluster had been studied before using the JHK pass-bands of the 2MASS database. We compared our astrophysical parameters of the cluster with those of Tadross (2009b); the results are summarized and listed in Table 1. Some differences occur in the distance of the cluster from the Sun, the distance from the Galactic center R gc , the projected distances on the Galactic plane from the Sun (X ⊙ & Y ⊙ ) and the distance from the Galactic plane (Z ⊙ ). | 2016-01-12T06:17:03.000Z | 2015-11-10T00:00:00.000 | {
"year": 2016,
"sha1": "d2ddee34a72121d2bf1ba0e9f6b732d56f9bb132",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d2ddee34a72121d2bf1ba0e9f6b732d56f9bb132",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
118639197 | pes2o/s2orc | v3-fos-license | A Novel Reflectometer for Relative Reflectance Measurements of CCDs
The high quantum efficiencies (QE) of backside illuminated charge coupled devices (CCDs) have ushered in the age of the large-scale astronomical survey. The QE of these devices can be greater than 90%, and depends upon the operating temperature, device thickness, backside charging mechanisms, and anti-reflection (AR) coatings. At optical wavelengths the QE is well approximated as one minus the reflectance; thus, measurement of the backside reflectivity of these devices provides a second, independent measure of their QE. We have designed and constructed a novel instrument to measure the relative specular reflectance of CCD detectors, with a significant portion of this device constructed using a 3D fused deposition modeling (FDM) printer. The device implements both a monitor and a measurement photodiode to simultaneously collect incident and reflected measurements, reducing errors introduced by the relative reflectance calibration process. While most relative reflectometers are highly dependent upon a precisely repeatable target distance for accurate measurements, we have implemented a measurement method which minimizes these errors. Using the reflectometer we have measured the reflectance of two types of Hamamatsu CCD detectors. The first device is a Hamamatsu 2k x 4k backside illuminated high-resistivity p-type silicon detector which has been optimized to operate in the blue from 380 nm - 650 nm. The second detector is a 2k x 4k backside illuminated high-resistivity p-type silicon detector optimized for use in the red from 640 nm - 960 nm. We have not only measured the reflectance of these devices as a function of wavelength, but have also sampled the reflectance as a function of position on the device, finding a reflectance gradient across these devices.
REFLECTOMETER DEVICE
The reflectometer was designed to measure the specular reflectance relative to a silicon calibration standard at wavelengths from 380 nm to 1100 nm. To perform a measurement, a collimated beam of light 8 mm in diameter is incident upon a target surface at an angle of 15° from normal. The effects of polarization upon the reflectance can be neglected, as the collimated beam is not polarized and these effects are negligible at a 15° angle from normal incidence; the reflectance is taken to be the average of the s and p polarizations.
The main structure of the reflectometer was 3D printed from acrylonitrile butadiene styrene (ABS) plastic. The 3D printing fabrication process allowed for greater design freedom, and the final structure was both lightweight and rigid. A cross section view of the reflectometer is shown in Fig.1. The illumination source for the reflectometer is a Horiba iHR-320 monochromator which is equipped with both a quartz tungsten halogen lamp (QTH) and a xenon arc lamp. The monochromator allows for the reflectance measurements to be performed through a range of wavelengths, and a slit to round Fiberguide fiber optic couples the monochromator to the reflectometer. The exit of the fiber illuminates a ground glass diffuser, which in turn illuminates a second ground glass diffuser. This pair of diffusers is intended to create a uniform illumination pattern, with the second diffuser being imaged by the pinhole. The pinhole is imaged by an aspheric lens, creating a collimated beam for the incident illumination of the sample. The required distance from the pinhole to the collimating lens is dependent upon wavelength. Over the intended operational bandpass of the reflectometer the aspheric lens to object distance varies by approximately 2 mm. To minimize the effects of the shifting focus distance aspheric lenses which are optimized for a narrower bandpasses were selected. The aspheric lenses were mounted at their required distances from the pinhole, and each has a broadband anti-reflective (AR) coating with bandpasses optimized for 350 nm -700 nm, 650 nm -1050 nm, and 1050 nm -1700 nm. The collimated beam is reflected off the surface to be measured, and the reflection is collected by the signal photodiode. The active surface of the signal photodiode is 6.6 mm square and is overfilled by the 8 mm diameter collimated beam. The signal photodiode is mounted on an X-Y linear stage to allow for minor adjustments to the alignment of the photodiode to the collimated beam. The monitor diode is placed to measure the output of the second ground glass diffuser. Figure 1: Reflectometer cross section view. A slit to round fiber couples the reflectometer to a monochromator source. The fiber illuminates a pair of ground glass diffusers to create an even illumination pattern. The second ground glass diffuser is imaged by a pinhole, and the pinhole is imaged by an aspheric lens. The aspheric lens creates a collimated beam which is incident upon the surface to be measured 15 • from normal. Finally the collimated beam is reflected back to a signal photodiode. A monitor diode is placed to measure the output of the second ground glass diffuser.
The photodiodes used for the signal and monitor measurements are OSI PIN-44DPI. The PIN-44DPI is a photovoltaic photodiode, which offers low noise at the expense of response speed. The response of the photodiode is shown in Fig. 2. A Keithley 6482 dual-channel pico-ammeter is used to measure the photodiode currents simultaneously. Fig. 3 shows the noise distributions for the entire signal chains of the signal and monitor photodiodes with zero illumination; the noise does not significantly impact reflectance measurements.
Reflectance measurements are performed in a Wenzel XO-87 coordinate measuring machine (CMM). A test cryostat is mounted in the CMM and serves as the fixture for holding detectors. The reflectometer is attached to the CMM's ram, which allows the reflectometer to be precisely positioned to perform measurements. Since the reflectance measurements are performed through the test cryostat's window, and the reflectance of these devices can be as low as a few percent, the dewar window can have a significant impact upon a reflectance measurement. The cryostat windows have been coated with a broadband AR coating to minimize reflections from the window.
REFLECTANCE MEASUREMENT TECHNIQUE
Reflectance is the fraction of incident electromagnetic power that is reflected at an interface. The reflectometer uses a monitor diode to measure the incident power and a signal diode to measure the reflected power. Using a monitor diode minimizes errors due to temporal variations in the illumination source. The top panel in Fig. 4 shows the measured monitor and signal currents for a reflectance measurement. A reflectance measurement is taken to be the ratio of the signal to monitor current; the second panel in Fig. 4 is the ratio of the currents. For a typical reflectance measurement the incident power is approximately 100 times greater than the reflected power. The reflectometer measures reflectance relative to a silicon standard. The standard is a 2" silicon wafer which has been chemically mechanically polished (CMP) on one side. Since the reflectometer measures specular reflectance, the calibration standard needs to have a similar surface roughness. Shen et al. 1 measured the bi-directional reflectance distribution function (BRDF) for a range of silicon wafer roughnesses, and showed that, for surfaces where the Rayleigh criterion holds, minimal amounts of light are diffusely scattered. By the Rayleigh criterion, a surface is optically smooth if ∆h < λ/8, where ∆h is the RMS surface roughness and λ is the wavelength of the incident light. While it is clear the CMP-polished surface of the silicon standard is well below the Rayleigh criterion, the surface roughness of the device to be measured has not been quantified. However, the wet chemical etching processes used to thin silicon typically decrease surface roughness, thus it is a reasonable assumption that the devices to be measured are also well below the Rayleigh criterion.
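As a quick numerical check of the criterion, the smooth-surface limit at the short end of the reflectometer's operating band can be evaluated directly; the helper below simply encodes ∆h < λ/8.

```python
def rayleigh_smooth_limit_nm(wavelength_nm):
    """RMS surface-roughness limit below which a surface is optically
    smooth by the Rayleigh criterion, delta_h < lambda / 8."""
    return wavelength_nm / 8.0

# The shortest operating wavelength sets the strictest limit:
print(rayleigh_smooth_limit_nm(380.0))   # ~47.5 nm RMS roughness
```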
Using the Fresnel equations, along with the indices of refraction (n) and extinction coefficients (κ) for silicon taken from Green, 2 the reflectance of silicon can be calculated. The third panel in Fig. 4 shows the calculated reflectance of silicon. To derive a reflectance from a measured current ratio, a wavelength-dependent calibration factor is necessary. The calibration factor is found by dividing the calculated silicon reflectance by the measured silicon current ratio. The bottom panel in Fig. 4 is an example of the calibration factor used to convert a current ratio into a reflectance. The reflectance of the device to be measured is found by multiplying its current ratio by the calibration factor.
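The calibration chain (calculated silicon reflectance divided by the measured silicon current ratio, then applied to the device's current ratio) can be sketched as below. The unpolarized Fresnel reflectance at 15° is computed from tabulated n and κ; the example optical constants in the comment are approximate values for crystalline silicon near 600 nm and are given only for illustration.

```python
import numpy as np

def silicon_reflectance(n, k, angle_deg=15.0):
    """Unpolarized specular reflectance (average of the s and p Fresnel
    reflectances) at the given incidence angle, for a material with
    complex refractive index n - i*k, illuminated from air."""
    n2 = n - 1j * k
    theta_i = np.radians(angle_deg)
    cos_i = np.cos(theta_i)
    sin_t = np.sin(theta_i) / n2          # Snell's law with a complex index
    cos_t = np.sqrt(1.0 - sin_t**2)
    r_s = (cos_i - n2 * cos_t) / (cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - cos_t) / (n2 * cos_i + cos_t)
    return 0.5 * (abs(r_s)**2 + abs(r_p)**2)

def calibrate_and_measure(si_ratio, sample_ratio, n, k, angle_deg=15.0):
    """Convert signal/monitor current ratios into an absolute reflectance:
    the calibration factor is the calculated silicon reflectance divided
    by the measured silicon current ratio."""
    cal = silicon_reflectance(n, k, angle_deg) / si_ratio
    return sample_ratio * cal

# Example near 600 nm (n ~ 3.94, k ~ 0.02 for crystalline silicon):
# silicon_reflectance(3.94, 0.02) is roughly 0.35.
```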
Relative reflectance measurements are susceptible to errors due to differing alignments of the calibration standard and the surfaces to be measured. Lesser 3 uses an optical method by which to precisely align the surface to be tested to the same position as the calibration standard. In our reflectometer the distance to the target from the reflectometer affects where the collimated beam is reflected onto the signal photodiode. To overcome this limitation the CMM is used to measure reflectance through a range of target distances, with the peak signal taken to represent the measured reflectance. In Fig.5 the current ratio is plotted versus shifts in distance to the target, and the peak signal is used to calculate the reflectance.
CCD REFLECTANCE MEASUREMENTS
Two types of Hamamatsu CCDs have been measured for their reflectance: a blue optimized CCD and a red optimized CCD. The blue optimized CCD is Hamamatsu part number S10892-1628(X), a backside illuminated fully depleted p-type device with a format of 4192 x 2048 15 µm pixels, fabricated on 100 µm thick high-resistivity silicon with a broadband AR coating applied to the backside. These devices are detailed in Kamata et al. 4 and Gunn et al. 5 The red optimized detector is a Hamamatsu S10892-1629(X), and is similar to the blue device with the same geometry and mask, but it is fabricated on 200 µm thick high-resistivity silicon and has an AR coating optimized for the red applied to the backside.
The precise positioning ability of the CMM also allows for the measurement of the reflectance across the extent of a device to reveal reflectance gradients. The diameter of the reflectometer's collimated beam is 8 mm and the photodiode is 6.6 mm square, thus a reflectance measurement is sampling an area which is similar in extent to the photodiode. The Hamamatsu CCDs have an active area approximately 30 mm wide in the serial direction and 60 mm long in the parallel direction. The surface reflectance was sampled using three stripes spaced 10 mm apart, where a stripe runs in the parallel direction, and each stripe is sampled every 3 mm. The reflectance of the surfaces were measured through a range of wavelengths, with the blue optimized CCDs being measured from 350 nm -700 nm at a 10 nm interval, and the red optimized CCDs were measured from 600 nm -1,100 nm at a 50 nm interval. The reflectance was found to be a slowly varying function of wavelength. Fig.10 is the reflectance across a blue optimized CCD taken at 400 nm. Fig.11 is the reflectance across the surface of a red optimized CCD taken at 950 nm. Both CCDs show reflectance gradients in the parallel direction which has been attributed to the coating process, and the variation in reflectance in the serial direction is minimal. The shape and magnitude of the reflectance gradient is wavelength dependent.
CONCLUSION
We have built a novel reflectometer which used 3D printing to fabricate the super structure. The 3D printing process offers more design freedom, and dramatically reduces manufacturing times. By implementing a monitor and signal diode we have minimized measurement errors due to the temporal variations of the illumination source, and the relative reflectance method. The reflectometer measures reflectance relative to a silicon standard, and the errors due to the positioning differences between the standard and the device under test have been minimized by measuring the reflectance through a range of target distances and taking the peak value as the reflectance measurement.
The reflectance of red and blue optimized Hamamatsu CCDs has been measured, and reflectance measured QE have been computed. The reflectance measured QE of the red optimized CCD matches the measured QE found for these devices given in other publications. We are currently preparing to measure the QE of the blue optimized devices, and we will then contrast the QE found by the two different techniques.
Having the ability to measure the reflectance across the surface of the CCD has allowed us to quantify the reflectance gradient as a function of position and wavelength. We have measured a gradient in reflectance which is directed in the parallel direction of the CCDs, and the size and shape of the gradient has been found to be wavelength dependent. The reflectance gradient is attributed to the process by which the AR coating is applied. | 2016-08-03T11:57:26.000Z | 2016-07-27T00:00:00.000 | {
"year": 2016,
"sha1": "b14b590b6231b7195e93526afae1220cbba1364d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1608.01159",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9f235c7d13b8dd759bc1c05227913d09c5388611",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science",
"Engineering"
]
} |
226304058 | pes2o/s2orc | v3-fos-license | Unraveling the Genotype‐Phenotype Relationship in Hypertrophic Cardiomyopathy: Obesity‐Related Cardiac Defects as a Major Disease Modifier
Abstract Hypertrophic cardiomyopathy (HCM) is the most common inherited cardiomyopathy and is characterized by asymmetric septal thickening and diastolic dysfunction. More than 1500 mutations in genes encoding sarcomere proteins are associated with HCM. However, the genotype‐phenotype relationship in HCM is incompletely understood and involves modification by additional disease hits. Recent cohort studies identify obesity as a major adverse modifier of disease penetrance, severity, and clinical course. In this review, we provide an overview of these clinical findings. Moreover, we explore putative mechanisms underlying obesity‐induced sensitization and aggravation of the HCM phenotype. We hypothesize obesity‐related stressors to impact on cardiomyocyte structure, metabolism, and homeostasis. These may impair cardiac function by directly acting on the primary mutation‐induced myofilament defects and by independently adding to the total cardiac disease burden. Last, we address important clinical and pharmacological implications of the involvement of obesity in HCM disease modification.
Hypertrophic cardiomyopathy (HCM) is the most common inherited cardiomyopathy, with an estimated prevalence of 1:500 to 1:200, and is a frequent cause of sudden cardiac death in young individuals. 1 HCM is clinically defined by increased left ventricular (LV) wall thickness (>15 mm) that cannot be attributed solely to abnormal loading conditions. 2 The most prominent clinical features of HCM include LV outflow tract obstruction caused by asymmetric septal thickening and diastolic dysfunction. 2 Histological analyses of cardiac tissue from patients with HCM show cardiomyocyte hypertrophy and disarray, fibrosis, and reduced capillary density. [3][4][5] In 50% to 60% of all patients with HCM, a pathogenic variant (mutation) is found in genes encoding sarcomeric proteins, the contractile machinery of cardiomyocytes. In this case, patients are termed genotype positive. Over 1500 mutations have been identified to be associated with HCM, most of which affect thick-filament genes (MYH7, MYBPC3, MYL2, and MYL3) and to a lesser extent thin-filament genes (TNNT2, TNNI3, TPM1, and ACTC1). [6][7][8] As the vast majority of genotype-positive patients carries a heterozygous mutation, HCM is considered to be an autosomal dominant disease. It must be noted that the number of identified gene variants of unknown significance has increased as a result of the larger diagnostic gene panels in clinical practice. 9,10 These newly identified variants of unknown significance may be pathogenic or may rather be disease modifiers. Understanding the exact contribution of newly identified gene variants to HCM pathophysiology is part of ongoing research. In the current review, we
Whereas it is well established that HCM is caused by sarcomere mutations, the phenotypical variation in terms of disease penetrance and severity is large in genotype-positive individuals; heterozygous mutation carriers may remain asymptomatic their entire life, while a first-degree relative may develop severe hypertrophy at a young age and may progress to end-stage heart failure (HF). 2,11,12 Disease models are not ideal to recapitulate such heterogeneity: mouse models with a heterozygous sarcomere gene mutation do not develop a cardiac disease phenotype at young age, whereas homozygous mice show early and accelerated cardiac dysfunction. [13][14][15][16] The latter pathogenic effect of sarcomere mutations is also evident from human cases with homozygous or compound heterozygous mutations that show severe cardiomyopathy at birth and death at childhood. 17,18 These studies show that the dose of the mutant sarcomere protein, which is regulated at RNA and protein level, determines the onset and severity of cardiomyopathy.
Based on the observation that harboring a heterozygous mutation is by itself not sufficient to initiate and drive disease progression, it has been hypothesized that HCM development is tightly intertwined with additional or secondary disease-modifying factors. [19][20][21] These additional disease hits may either directly aggravate mutation-related dysfunction by affecting the cell systems that maintain cardiomyocyte homeostasis aimed to prevent accumulation of mutant protein or impair cardiac function independently of mutation-related cardiomyocyte dysfunction.
CLINICAL REPORTS ON THE IMPACT OF OBESITY ON HCM PREVALENCE, PHENOTYPE, AND OUTCOME
In recent years, cohort studies yielded significant insight in the involvement of obesity in phenotypic expression of HCM (Figure 1). High prevalence of obesity in patients with HCM was reported by Reineck and colleagues. 24 In patients with HCM who responded to a survey of health behaviors, mean body mass index (BMI) was >30 kg/m 2 and prevalence of obesity was 43%, which are both significantly higher than in the general US population. 24 These findings were later confirmed in large-scale international multicenter registries of patients with HCM (ie, the SHaRe 32,33 This raises the question whether obesity adversely modifies HCM disease expression, or rather is a reflection of an increase in sedentary behavior after diagnosis. 35 Indeed, although patients with HCM are advised to regularly perform nonstrenuous exercise, 2,36 most patients indicate they do not meet physical activity recommendations because of physical discomfort, fear of sudden cardiac death, or misinterpreted medical advice. 24,37 The hesitance of patients to perform physical activity may thus contribute to the observed increase in body weight, and the sedentary lifestyle may thereby have a negative impact on disease progression. Recent evidence specifically supports the potential of high body weight to adversely predispose individuals to develop HCM. 29,30,34 The nationwide register-based prospective cohort studies in Sweden observed that high BMI in young adulthood was a predictor of developing HCM or other cardiomyopathies later in life. Among men conscripted for military service, obesity displayed a hazard ratio (HR) of 3.17 to 3.39 for being diagnosed with HCM compared with lean body weight. 29 Strikingly, each 1-unit increase in BMI was associated with a 9% increase in the risk of being diagnosed with HCM. 29 In women of childbearing age, obesity was associated with a nearly 3 times higher risk (HR, 2.60-2.77) versus normal BMI, and a 6% increase per 1-unit increase in BMI was reported. 30 As pointed out by the authors, the finding that high BMI before disease onset is associated with a greater chance of being diagnosed with HCM in late adulthood implies that obesity-induced cardiac stress may sensitize and aggravate mutation-induced myocardial defects, resulting in phenotypic expression of HCM. 30 Similar findings were reported in a recent study using nationwide population-based data from the Korean National Health Insurance Service. 34 Over a median follow-up of 5.2 years, individuals with a BMI >30 kg/m 2 had a 3 times higher risk (HR, 3.00) of being diagnosed with HCM compared with lean individuals, and each 1-unit increase in BMI displayed an 11% risk increase.
Nonstandard Abbreviations and Acronyms
In addition to modifying disease penetrance, obesity is associated with a worse phenotype and clinical course, as demonstrated by several studies. 23,[25][26][27][31][32][33] In terms of clinical presentation, obese patients with HCM display notably differing functional and morphological features compared with lean patients with HCM (summarized in Table 1 and Figure 1). Obesepatients with HCM are more symptomatic, as evaluated by New York Heart Association (NYHA) functional class, but also present more frequently with a significant LV outflow tract obstruction. 23,25,27,32 The functional limitation in obese patients with HCM is also manifested by lower exercise tolerance and capacity compared with nonobese patients. 25,27 Moreover, obesity is associated with a higher LV mass index, LV cavity enlargement, larger left atrial diameter, and greater posterior wall thickness. 23,25,27,38 The latter is also observed in obese pediatric patients with HCM. 31 With respect to the association between BMI and maximal LV wall thickness (ie, typically septal thickness), the studies cited here suggest a modest impact of obesity, requiring vast sample sizes for detection. 32,33 No difference was reported in ejection fraction between obese and lean patients with HCM. 23,25,27,32 Two studies have described associations between BMI and (long-term) clinical outcomes. 23,32 Olivotto and colleagues report no difference in survival between lean and obese patients during a median follow-up of 3.7 years. However, obesity was found to be an independent predictor (HR, 3.6) of developing NYHA ≥III functional class symptoms. 23 Also, in a larger cohort with a median follow-up of 6.8 years, Fumagalli et al found higher incidence of NYHA ≥III symptoms at last visit (10% versus 16%; P<0.001) and atrial fibrillation during follow-up (19% versus 24%; P=0.03) in obese compared with lean patients with HCM. 32 Compared with lean patients, obese patients more often developed the HF composite outcome (defined as LV ejection fraction <35%, development of NYHA class III/IV symptoms, cardiac transplant, or LV assist device implantation 39 ; lean 19% versus obese 30%; P<0.001). In addition, compared with lean patients, obese patients more frequently developed the HCM-related overall composite outcome (defined as first occurrence of any ventricular arrhythmic event or HF composite end point [without inclusion of LV ejection fraction], all-cause mortality, atrial fibrillation, and stroke 39 ; lean 42% versus obese 55%; P<0.001). 32 Moreover, obesity was independently also positively associated with the HF composite outcome (HR, 1.89) and the HCM-related overall composite outcome (HR, 1.63). 32 Occurrence of ventricular arrhythmias did not display an association with obesity, suggesting that obese patients with HCM are not at increased risk of sudden cardiac death. However, the authors emphasize that, because of low event rates, longitudinal follow-up studies are needed to draw definite conclusions about arrhythmic risk in obese patients with HCM. 32,39 Of note, in the general population, obesity is associated with other cardiovascular risk factors, such as hypertension and type 2 diabetes mellitus (DM-II). 22,40 Increased prevalence of these conditions by BMI group is also a universal finding in the clinical reports discussed here. However, it was not reported or, because of small sample sizes, not possible to thoroughly study how hypertension and DM-II may independently influence baseline clinical phenotype and disease course.
Nollet et al
Obesity as Major Disease Modifier in HCM Nevertheless, higher BMI has been associated with new-onset HF, regardless of etiology. 41 Solely with respect to LV mass index and exercise tolerance an independent positive association with hypertension was demonstrated. 23,25 Evidence supporting a negative impact of obesity-related cardiovascular risk factors on HCM disease expression and progression comes from 3 studies. 21,28,34 The Korean nationwide study addressing the relationship between BMI and HCM diagnosis during follow-up substratified 3 BMI groups (<23, 23.0-24.9, and >25 kg/m 2 ) by metabolic status (ie, metabolically healthy versus metabolically unhealthy, as defined by presence of hypertension, hyperlipidemia, or diabetes mellitus). 34,42 In each BMI group, it was observed that metabolically unhealthy participants had an approximately 1.5 times higher HR for being diagnosed with HCM compared with metabolically healthy participants. In a cohort of MYL2 mutation carriers (n=38), hypertension was a strong independent risk factor for HCM manifestation. Moreover, presence of any risk factor for hypertrophy, such as obesity, was found in 89% of all patients. 21 The impact of DM-II on clinical phenotype and outcome (Table 1) was studied in a matched cohort composed of diabetic and nondiabetic patients with HCM from Spanish and Israeli referral centers (n=294). 28 Compared with nondiabetic patients, diabetic patients with HCM more often displayed left atrial enlargement, diastolic dysfunction, and mitral regurgitation. Patients with HCM with DM-II additionally displayed worse NYHA functional class symptoms and lower exercise capacity. No significant differences were reported with respect to ventricular arrhythmic events in patients with HCM with DM-II. Clinical course was reported to be more severe in diabetic patients with HCM, as evidenced by a significantly higher 15-year mortality rate (non-DM-II 15% versus DM-II 22%; P=0.03, log-rank test). 28 Taken together, these studies display a clear negative impact of obesity, and its associated comorbidities, on HCM disease expression and progression. The question that therefore arises concerns the mechanisms by which obesity impacts on cardiac function, causing this phenomenon. Because obesity is known to drive LV hypertrophy and diastolic dysfunction in the general population, 43 it may be hypothesized that obesity promotes phenotypic expression and progression of HCM by impairing cardiac function in parallel with mutation-induced impairments. Alternatively, or in addition, obesity-related myocardial stress may drive HCM by enhancing mutation-induced pathogenic effects.
SARCOMERE INEFFICIENCY AT THE BASIS OF HCM PATHOGENESIS
A brief overview of the current understanding of the pathophysiology of HCM is required to interpret the clinical observations described above. A range of functional changes have been described as a consequence of sarcomere mutations, which are schematically depicted in Figure 2 and briefly summarized in the following paragraph. Mutations in sarcomere proteins cause increased Ca2+ sensitivity of the myofilaments, increased tension cost, 16,[44][45][46][47][48][49] and altered myosin sequestration, 50,51 which together lead to increased ATP use. Increased Ca2+ sensitivity induces myofilament activation at relatively low Ca2+ levels and delays the dissociation of Ca2+ from cardiac troponin C, resulting in prolonged cross-bridge activation and impaired relaxation. Increased tension cost entails that in HCM cardiomyocytes more ATP is hydrolyzed to generate the same amount of force compared with healthy cardiomyocytes. Altered myosin sequestration refers to a smaller portion of myosins achieving the superrelaxed state conformation during diastole, which is associated with increased ATPase activity at low [Ca2+] and prolonged duration of relaxation. Increased ATP consumption caused by sarcomere mutations has been proposed to propel HCM development via several self-reinforcing mechanisms. 19,[52][53][54][55][56] Elevated ADP levels as a result of ATP depletion are thought to play a pivotal role herein. High ADP levels directly stimulate mitochondrial ATP regeneration, 57 which in the healthy heart is accompanied by increased mitochondrial calcium uptake to boost activity of the Krebs cycle needed to fuel ATP regeneration and detoxify concomitant reactive oxygen species (ROS) formation. 58,59 In HCM, it has been postulated that high mitochondrial workload caused by ATP depletion is not matched by a proper increase in mitochondrial [Ca2+] due to Ca2+ sequestration in the myofilaments. 56
Figure 2. Proposed pathophysiology of hypertrophic cardiomyopathy. Mutant protein gives rise to sarcomere inefficiency, disturbed calcium homeostasis, and diastolic dysfunction. This evokes mitochondrial dysfunction and oxidative stress, raising mutant protein levels via inhibition of protein quality control mechanisms, which aggravates cardiomyocyte dysfunction. This self-reinforcing feedback loop ultimately promotes prohypertrophic and fibrotic cardiac remodeling. During disease development, desensitization of the β-adrenergic receptor (β-AR) occurs, which causes reduced myofilament protein phosphorylation and contributes to sarcomere dysfunction. See main text for a more elaborate description. ROS indicates reactive oxygen species.
Reductions in mitochondrial [Ca2+] will reduce antioxidative capacity and affect the ability to adequately buffer ADP. 56 Increased ROS production and reduced antioxidative capacity give rise to excessive ROS levels and culminate in oxidative stress, damaging macromolecules and organelles and adversely modifying a plethora of redox-sensitive signaling pathways and proteins that potentially drive HCM disease progression. 19 In brief, oxidative stress and concomitant oxidative modifications may, among other effects, induce ubiquitin-proteasome system dysfunction and endoplasmic reticulum stress, raising mutant protein dose. [70][71][72] These pathogenic effects further disrupt myofilament function and/or contribute to ROS production, disturbing cardiomyocyte homeostasis and inducing prohypertrophic and fibrotic signaling. The apparent observation of higher mutant protein levels at more advanced disease stages is in line with such a feed-forward mechanism in HCM pathophysiology. 73 A particularly pathogenic factor in HCM disease progression that needs to be highlighted here is diastolic dysfunction. Diastolic dysfunction may initially be caused by relatively high cross-bridge activity during diastole as a result of increased myofilament Ca2+ sensitivity, and is likely worsened through ADP-mediated Ca2+ sensitization, 74 oxidative stress, 62 and reduced β-adrenergic receptor signaling. 75,76 Severe diastolic dysfunction may cause microvascular dysfunction, as coronary perfusion takes place during diastole, 77 and ultimately leads to local ischemia, tissue death, and replacement fibrosis, which dramatically alters the already disturbed redox balance in cardiomyocytes. 78 In summary, energetic and metabolic stress appears to be a central consequence of the sarcomere mutation-induced cardiomyocyte defects.
OBESITY IN HCM: PARALLEL OR ENHANCING EFFECT?
As mentioned, obesity may impact on HCM phenotype and disease course by affecting cardiac function independently of mutation-induced effects, adding to the total cardiac disease burden, whereas it can also be hypothesized that obesity enhances mutation-induced pathogenic effects. The general finding of a positive association between BMI and NYHA class 25,28,32 is for example also observed in patients with HF with preserved ejection fraction, 79 thus not necessarily suggesting a direct impact of obesity on mutation-related defects in HCM.
In the general population (ie, in the absence of HCM), cardiac remodeling associated with obesity is predominantly reflected by higher LV mass index, larger LV cavity size, and diastolic dysfunction. 43 Also in patients with HCM, the most notable impact of obesity appears to be higher LV mass index, LV cavity enlargement, and worse diastolic dysfunction, the latter being reflected by a larger left atrial diameter. 23,25,32 These findings seem to argue mostly in favor of a parallel effect of obesity on the HCM myocardium. However, Rayner and colleagues recently reported that in HCM the degree of LV cavity dilatation associated with increasing BMI was 2-fold larger than in nondiseased hearts. 38 In addition, the increase in LV mass index associated with an increase in BMI was higher in hearts with HCM compared with nondiseased hearts (1.3 g/kg per m2 in nondiseased hearts versus 2.3 g/kg per m2 in hearts with HCM), although the difference in slope was not statistically significant (P=0.10). The finding that the heart with HCM seems to dilate excessively to increase stroke volume might suggest that the presence of a sarcomere mutation diminishes the capacity of the myocardium to cope with obesity-related increased physiological demand and stress.
In addition, LV outflow tract obstruction, a characteristic feature of HCM, is more common in obese patients with HCM. 23,25,27,32 As obesity is related with increased LV mass, the typical asymmetric septal hypertrophy may be more severe in obese than in lean patients with HCM. However, clinical studies on the impact of obesity on HCM report either no or only a modest difference in septal thickness between lean and obese patients. 23,25,27,32,33 In fact, in a subpopulation of patients with HCM with a (likely) pathogenic mutation (n=1035), Fumagalli and colleagues report no effect of obesity on maximal LV wall thickness. The mean maximal septal thickness in the aforementioned genotype-positive cohort was relatively high (20 mm), which may indicate that septal remodeling was already too advanced to detect a large obesity-mediated effect on LV mass. 32 Interestingly, in a small cohort (n=32) with a mean maximal septal thickness of 17 mm, a positive association was found between septal thickness and truncal fat. 26 However, the number of genotype-positive patients was not reported in this study; thus, it remains unclear if the observed association would hold true in a strictly genotype-positive patient population. Of note, the overall positive association between BMI and maximal LV wall thickness reported by Fumagalli et al was ascribed to observations made in mutation-negative patients with HCM, 32 suggesting a direct influence of obesity on septal thickness in the absence of sarcomeric mutations. Notable differences in cardiac remodeling and morphological features have recently been reported between sarcomere mutation-positive and mutation-negative patients with HCM, 33 warranting further study into the mechanisms underlying this phenomenon. Taken together, assessment of the role of obesity on cardiac remodeling in genotype-positive individuals is challenging, and warrants prospective follow-up studies in genotype-positive, phenotype-negative mutation carriers.
OBESITY-RELATED CARDIAC DEFECTS AS SECOND DISEASE HIT IN HCM: POSSIBLE PATHOMECHANISMS
Obesity and its associated comorbidities may induce and aggravate HCM via multiple mechanisms that have been described in obesity-related cardiac dysfunction and diabetic cardiomyopathy, and range from vascular dysfunction to structural changes and perturbations in cardiomyocyte homeostasis and metabolism. We provide an overview of described mechanisms, with a possible link to HCM. The proposed interplay of obesity-related cardiac defects and mutation-induced pathomechanisms is schematically visualized in Figure 3.
Endothelial Dysfunction and Inflammation
We put forward that endothelial dysfunction and inflammation may be important mediators that aggravate cardiomyocyte dysfunction in HCM pathophysiology. It has been proposed that a systemic proinflammatory state caused by comorbidities, such as obesity and diabetes mellitus, underlies endothelial dysfunction. 80 In brief, microvascular endothelial inflammation stimulates profibrotic signaling by fibroblasts and induces cardiomyocyte stiffness and hypertrophy via reduced NO bioavailability and protein kinase G activity. The net result thereof is hypertrophic remodeling, diastolic dysfunction, and impaired coronary flow reserve, 81 contributing to HCM pathophysiological features in several ways. As highlighted earlier, diastolic dysfunction and reduced coronary perfusion may be particularly pathogenic in HCM disease progression because of their redox-disturbing and ischemic effects. Vascular dysfunction has been observed in hearts of patients with HCM, in particular in patients with a gene mutation, and is thought to precede development of cardiac hypertrophy, as evidenced by blunted coronary flow in response to adenosine in nonhypertrophied regions of the heart. [82][83][84][85] Extrinsic factors contributing to microvascular dysfunction may thus bear exceptional potential to set off pathologic remodeling in HCM. Intriguingly, the observation of vascular dysfunction especially in mutation-positive patients implies that the presence of mutant protein causes vascular remodeling (eg, via oxidative stress-induced profibrotic signaling), resulting in increased adventitial collagen deposition. 19,86,87 Oxidative stress as a result of NO synthase uncoupling and nicotinamide adenine dinucleotide phosphate oxidase activity in endothelial dysfunction may increase mutant protein dose via ubiquitin-proteasome system dysfunction, 70,88,89 and therefore possibly represents a self-reinforcing mechanism through which endothelial dysfunction and disturbed cardiomyocyte homeostasis impact on one another.
Cardiac adiposity may represent an important mediator of local myocardial inflammation and endothelial dysfunction. Recent studies suggest the existence of direct interactions between epicardial adipose tissue and the myocardium, 90,91 and abdominal adiposity has been associated with new-onset HF. 92 The epicardial fat volume was associated with the degree of cardiac hypertrophy and severity of diastolic dysfunction, and circulating biomarkers related to myocyte injury. 90 These findings indicate that there is direct communication between epicardial fat and the myocardium. Myocardial lipid accumulation has been recognized as a source of proinflammatory adipokines and cytokines, 93 contributing to impaired vasodilation, cardiac stiffening, and remodeling, and is associated with lipotoxicity, which is detrimental to cardiomyocyte homeostasis. 94 The association between obesity and the onset and progression of HCM therefore may be explained by myocardial adiposity, either through interactions between epicardial fat and the myocardium or rather by direct intramyocardial accumulation of fat. 95 Clearly, comprehensive knowledge on the role of obesity-related systemic changes and coincident endothelial dysfunction and inflammation in the development of HCM is absent, and warrants research.
Obesity-Induced Hemodynamic Alterations and Cardiac Hypertrophy
Obesity is characterized by changes in hemodynamics and cardiac remodeling, 43,96 which hold several implications for the HCM myocardium. Obesity is associated with increased LV mass and frequently displays concentric LV geometry. 43,97 Hypertrophic stimuli driving LV remodeling may impact on cellular homeostasis and on mechanisms aimed at preventing incorporation and accumulation of mutant protein, thereby eliciting mutation-related pathogenicity. For example, the mechanistic target of rapamycin (mTOR), a major regulator of protein synthesis and cardiomyocyte growth that is upregulated in DM-II and obesity, negatively modulates ubiquitin-proteasome system and autophagic activity. 98 In the HCM cardiomyocyte, this would entail increased production of mutant protein but reduced clearance, raising mutant protein dose. Interestingly, in a MYBPC3-targeted knock-in mouse model of HCM, activation of autophagy by rapamycin administration or caloric restriction improved the disease phenotype, 99 highlighting the importance of proper proteostasis in preventing HCM disease development. Moreover, in obese individuals, cardiac output and workload are elevated because of increased circulating blood volumes and, in the case of coinciding hypertension, increased afterload. 43,100 Mutant protein-harboring cardiomyocytes in HCM are already faced with increased mitochondrial workload and concomitant stress because of high ATP use by inefficient sarcomeres, 16,[44][45][46][47][48][49][50][51] which thus may be exacerbated by sustained elevated cardiac workload due to hemodynamic changes. In addition, missense mutations in HCM are characterized by impaired length-dependent activation, 44 which likely limits the contractile reserve of the heart during episodes of augmented preload. Sustained obesity-induced preload elevation may therefore lower the threshold for compensatory hypertrophy in HCM. The correlation between septal thickness and amount of truncal fat, but not total body fat or epicardial fat, in patients with HCM, observed by Guglielmi and colleagues, is in line with the notion of a hemodynamics-mediated effect on cardiac remodeling. 26 Interestingly, epicardial fat amount was associated with N-terminal prohormone of brain natriuretic peptide levels. 26 Together, these observations imply the importance of both hemodynamic alterations and epicardial fat accumulation in the phenotypical presentation of HCM.
Figure 3. Proposed interplay of obesity-related cardiac defects and mutation-induced pathomechanisms. Increased adrenergic drive in obesity accelerates β-adrenergic receptor (β-AR) desensitization. Increased preload and/or afterload raise mitochondrial workload. Metabolic overfueling and cardiac adiposity promote endothelial dysfunction and inflammation, and induce lipotoxicity, glucotoxicity, and oxidative stress. Endothelial dysfunction and inflammation further raise oxidative stress, aggravate diastolic dysfunction and perfusion defects, and promote hypertrophy and fibrosis. Protein quality control is impaired by metabolic overfueling and left ventricular (LV) remodeling, raising mutant protein levels. BCAA indicates branched chain amino acid; and ROS, reactive oxygen species.
Sympathetic Nervous System Activation in Obesity
Symptomatic HCM with LV outflow tract obstruction is characterized by a high adrenergic drive. 101,102 Chronic β-adrenergic receptor overstimulation leads to receptor downregulation and desensitization of this pathway, 76 which accordingly is observed and reflected by several defects in HCM. 44,48,101,102 In human myectomy tissue, low phosphorylation of cardiac troponin I, a downstream target of protein kinase A, was observed, which causes increased Ca 2+ sensitivity and impaired length-dependent activation of the myofilaments. 44,48,103,104 Studies in a mouse model of HCM revealed that this phenomenon may be explained by selective phosphorylation of protein kinase A targets under conditions of β-adrenergic desensitization. 105 These human and mouse studies led to the concept of defective β-adrenergic receptor signaling as an important second disease hit in HCM disease progression. 19 In obese and diabetic individuals, overactivity of the sympathetic nervous system is a common feature. 106,107 Thus, adrenergic receptor stimulation via this route may add up to the already increased adrenergic drive in HCM, leading to premature impairment of β-adrenergic signaling pathways and further deterioration of cardiomyocyte function. In addition, β-adrenergic stimulation has also been described to evoke oxidative stress, 108,109 therefore representing an additional mechanism through which obesity-induced sympathetic nervous system activation may disrupt cardiomyocyte homeostasis.
Obesity-Related Changes in Cardiac Metabolism
Metabolic changes associated with obesity, such as altered substrate preference and presence of toxic metabolites and intermediates, represent additional mechanisms through which obesity may impact on HCM pathophysiology. In obese, and in particular in diabetic individuals, hyperlipidemia and hyperinsulinemia (ie, metabolic overfueling) result in an increased delivery of fatty acids to the myocardium. [110][111][112] As a result, cardiac metabolism loses substrate flexibility and becomes more reliant on fatty acid oxidation, which may be detrimental to the heart in multiple aspects. [113][114][115] Mitochondrial ATP production through fatty acid oxidation is less efficient than glucose oxidation in terms of the number of ATP molecules produced for each oxygen atom consumed (phosphate/oxygen ratio, 2.33 versus 2.53, respectively 115 ); in the HCM cardiomyocyte, such an imbalance in energy production may exacerbate mutation-related perturbations of the mitochondrial capacity to regenerate ATP. In addition, disproportionate fatty acid oxidation increases expression of uncoupling proteins, 113 further compromising mitochondrial ATP production. Importantly, despite the increase in fatty acid oxidation compared with glucose oxidation, the uptake of fatty acids exceeds fatty acid oxidation capacity and results in the intracellular accumulation of lipids. 116 These lipids can be converted into toxic lipid species (eg, diacylglycerol and ceramides), which cause lipotoxicity. 115,117,118 Lipotoxicity is associated with numerous deleterious effects, such as oxidative stress, mitochondrial dysfunction, apoptosis, endoplasmic reticulum stress, and inflammation. 94,[119][120][121] In addition, lipid overload elevates the level of acetyl-CoA precursors, 122 which has been observed in skeletal muscle in DM-II and obesity and in failing hearts. 123,124 As pointed out by Fukushima and Lopaschuk,125 this may depress autophagic activity in the heart, as increased acetyl-CoA negatively regulates autophagy. 126 As discussed above, inhibition of autophagy may reduce clearance of mutant protein, thus raising mutant protein levels. Obesity has also been associated with increases in epicardial and intramyocardial fat in patients with HF, suggesting that the lipid accumulation is a general pathological cardiac response. 90,91,95 Hyperglycemia-induced glucotoxicity may occur in obesity and in particular in DM-II, which could also contribute to disease progression in HCM. High glucose exposure further promotes oxidative stress via nicotinamide adenine dinucleotide phosphate oxidase activation and mitochondrial ROS formation. 127 Hyperglycemia moreover induces formation and myocardial deposition of advanced glycation end products, which promotes inflammation and diastolic dysfunction. 128,129 Activation of the hexosamine biosynthetic and polyol pathways may in addition stimulate prohypertrophic signaling and oxidative stress. 115,130,131 Last, the possible impact of branched chain amino acids (BCAAs) on cardiometabolic risk has recently gained interest. 132 In obese and diabetic individuals, circulating BCAAs are typically increased because of dietary intake and may accumulate in the myocardium in the case of metabolic perturbations. 133 BCAAs have been hypothesized to promote ROS formation, proinflammatory signaling, and mTOR activation in the myocardium. 
133,134 Recent analyses in the Hong Kong Diabetes Register demonstrated circulating BCAA levels to be independently associated with incident HF in patients with DM-II, 135 warranting further study into the mechanisms by which BCAAs may affect the myocardium. Altogether, a wide variety of metabolic perturbations associated with obesity and DM-II may negatively impact on the HCM myocardium and thus likely represent a major adverse modifier of HCM development and progression.
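To make the efficiency gap implied by the phosphate/oxygen ratios quoted earlier in this section more concrete, the short arithmetic sketch below compares ATP yield per oxygen atom for fatty acid versus glucose oxidation. Only the two ratio values (2.33 and 2.53) come from the text; the calculation itself is purely illustrative.

```python
# Rough arithmetic illustration of the phosphate/oxygen (P/O) ratios quoted above.
# The two ratio values are from the text; everything else is illustrative only.
po_fatty_acid_oxidation = 2.33   # ATP formed per oxygen atom, fatty acid oxidation
po_glucose_oxidation = 2.53      # ATP formed per oxygen atom, glucose oxidation

relative_shortfall = 1 - po_fatty_acid_oxidation / po_glucose_oxidation
extra_oxygen_needed = po_glucose_oxidation / po_fatty_acid_oxidation - 1

print(f"ATP per oxygen atom is ~{relative_shortfall:.1%} lower with fatty acid oxidation")
print(f"~{extra_oxygen_needed:.1%} more oxygen is needed to regenerate the same amount of ATP")
```

In other words, a shift toward fatty acid oxidation costs the heart roughly 8% of its ATP yield per unit of oxygen, which is the imbalance the text argues may exacerbate the mutation-related energy deficit.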
THERAPEUTIC AND CLINICAL IMPLICATIONS
Lifestyle Interventions
Currently, there are no pharmacological treatment options available to cure or prevent HCM, although (ongoing) clinical trials aimed at altering contractile abnormalities and improving metabolism show promise. [136][137][138] Current treatment strategies predominantly include use of β-blockers and antiarrhythmic drugs and surgical myectomy to ameliorate LV outflow tract obstruction, which are thus mostly aimed at management of symptoms and complications. 2 The drastic impact of obesity and its associated comorbidities on the mutation-harboring myocardium described in this review highlights the importance of weight loss and control in the clinical management of HCM. Given the association between high BMI at a young age and the risk of developing HCM later in life, 29,30 maintaining a healthy body weight may prevent or delay symptomatic expression of HCM in a significant proportion of mutation carriers. Accordingly, weight loss could substantially improve the clinical course in obese individuals with manifest HCM. It has been well established that weight loss following diet and/or exercise improves functional capacity in HF patients with preserved ejection fraction. 139,140 However, reports on the benefit of weight loss in HCM are lacking. At the moment, there is one case report that demonstrated a significant amelioration of the clinical phenotype and partial regression of cardiac hypertrophy following weight loss in a 17-year-old obese boy with apical HCM. 141 Furthermore, studies testing the effect of exercise on the myocardium in patients with HCM are scarce, which is likely due to safety concerns about exercise in patients with HCM. 142 Nonetheless, the limited number of studies report good safety of exercise and consistently observe a positive effect on functional capacity and clinical outcome. [142][143][144][145][146] Cavigli and colleagues recently formulated several key recommendations for exercise in patients with HCM that reinforce the notion that exercise is safe and potentially beneficial for patients with HCM. Nevertheless, adequately powered clinical trials are required to determine the effect of exercise and weight loss on myocardial remodeling and clinical outcomes in patients with HCM. 142
HCM With Diabetes Mellitus
Weight loss remains a core component of all lifestyle interventions in patients with DM-II as it improves glycemic control and disease progression. 147,148 The therapeutic effects of metformin, which has been the cornerstone of pharmacological treatment of DM-II for decades, are for instance strongly linked to its effects on weight loss. 149 The therapeutic landscape of DM-II has, however, dramatically changed in recent years following several large randomized outcome trials with sodium-glucose cotransporter 2 inhibitors (SGLT2i) and glucagon-like peptide 1 analogues. Both classes of drugs exert favorable effects on body weight and reduce the incidence of cardiovascular events and mortality in patients with DM-II. 148,150,151 SGLT2i have also consistently been shown to reduce the incidence of HF hospitalizations in patients with or at high risk for cardiovascular disease. 148 Mechanistically, glucagon-like peptide 1 analogues and SGLT2i exert a variety of roughly comparable systemic effects, including improvements in glycemic control, endothelial function, and blood pressure. In addition, both drug classes restore myocardial glucose oxidation in the diabetic heart. 150,151 Interestingly, effects of SGLT2i on glycemic control are modest, whereas their effects on the heart are profound and could involve direct cardiac effects. 151 Indeed, SGLT2i attenuate cardiac remodeling, reduce oxidative stress, and improve mitochondrial function in diabetic and nondiabetic animals. [151][152][153] Furthermore, SGLT2i increase the bioavailability of ketones and promote their cardiac use as an additional fuel source, thereby restoring cardiac ATP. 151,153,154 Finally, SGLT2i reduce the volume of epicardial adipose tissue, which may provide benefit to the heart as epicardial adipose tissue is thought to contribute to cardiac dysfunction through multiple mechanisms. 90,155 Glucagon-like peptide 1 analogues and SGLT2i are both recommended as possible first-line agents for the treatment of patients with DM-II at increased cardiovascular risk. These cardiometabolic effects of SGLT2i and to a lesser extent glucagon-like peptide 1 analogues suggest that they could also exert beneficial effects on the energy-depleted myocardium in patients with HCM. SGLT2i also reduce the incidence of HF hospitalizations, and patients with HCM are at increased risk of developing HF. One might thus argue that SGLT2i should be preferred as the treatment of choice for DM-II in patients with HCM. Evidence to support this concept is currently unavailable.
HCM Without Diabetes Mellitus
There are no evidence-based pharmacological therapies that target obesity-related cardiac defects in HCM. The SGLT2i dapagliflozin was recently shown to reduce the incidence of cardiovascular death and the progression of HF in nondiabetic patients with HF and reduced ejection fraction. 156 Since the cardiometabolic effects of SGLT2i are independent of the presence of diabetes mellitus, it is likely that these drugs could also benefit patients with HCM who develop HF with reduced ejection fraction. Nevertheless, most patients with HCM develop symptoms of HF with preserved ejection fraction or LV outflow tract obstruction and data to support the use of SGLT2i in these patients are lacking. The results of a large cardiovascular outcome trial in patients with HF and preserved ejection fraction are therefore eagerly awaited. 157
Targeting Metabolism in HCM
Metabolic therapy with compounds that inhibit mitochondrial fatty acid oxidation (eg, trimetazidine and perhexiline) may be effective as general treatment of HCM. 137,138,158,159 These drugs are thought to improve ATP regeneration by shifting mitochondrial metabolism away from fatty acid oxidation to more oxygen-efficient glucose oxidation. 158 This might be particularly beneficial during the early phase of disease development, since established HF is characterized by a major increase in anaerobic glycolysis. An ongoing clinical trial testing the effect of trimetazidine on myocardial efficiency in phenotype-negative MYH7 mutation carriers will yield more insight herein. 138 Boosting mitochondrial ATP production may aid the myocardium in coping with the increase in workload caused by primary mutationinduced myofilament defects and increased preload and afterload in the context of obesity. However, in patients with HCM with defective myocardial insulin signaling, such drugs may be ineffective because of impaired myocardial glucose uptake. In addition, inhibition of fatty acid oxidation without lowering circulating levels and myocardial uptake of fatty acids may make the heart subject to lipotoxicity, 160,161 possibly mitigating the positive effects of improved glucose oxidation. Reducing fatty acid uptake and oxidation may be achieved via inhibition of fatty acid translocase, 162 which has recently been demonstrated to hold therapeutic potential in the treatment of diabetic cardiomyopathy. 163 This strategy may therefore also be a future therapeutic target for HCM treatment, particularly in patients with coexisting DM-II.
CONCLUSIONS
Obesity is associated with increased HCM penetrance and is characterized in patients by a more severe phenotype and worse disease progression. Obesity and its associated comorbidities affect the myocardium harboring sarcomere mutations via multiple mechanisms. These include neurohumoral activation, hemodynamic changes, LV remodeling, inflammation, perfusion defects, and metabolic perturbations, which may both sensitize mutation-induced defects and impair cardiac function independently. Gaining more insight into the interplay between obesity- and mutation-induced defects in HCM development and progression requires extensive (pre)clinical study. Clinically, we highlight body weight loss and control as a key component of patient management. Novel antidiabetic drugs and metabolic therapy aimed at improving glucose metabolism may be effective pharmacological treatment strategies in obese patients with HCM.
Sources of Funding
We acknowledge support from the Netherlands Cardiovascular Research Initiative, an initiative with support from the Dutch Heart Foundation (CVON2014-40 DOSIS) and the Netherlands Organization for Scientific Research (NWO VICI, grant 91818602; NWO VIDI, grant 91713350; NWO VENI, grant 016176147).
"year": 2020,
"sha1": "7ca2711dd43d53f0feefcf037fa99ded785956d8",
"oa_license": "CCBYNC",
"oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.120.018641",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c41b6146d917c55610697bd13c4f2cb3ab8b20bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222008141 | pes2o/s2orc | v3-fos-license | Inbuilt Characteristics of Hydrolytic Enzymes Activities in Root Tissues of Chickpea (Cicer arietinum L) Cultivars of Gujarat against Fusarium Wilt (Fusarium oxysporium f.spp ciceri) Disease Infection
A field experiment was conducted in the sick plot of the Pulse Research Station, Junagadh Agricultural University (JAU), Junagadh, to examine the hydrolytic enzyme profiles of six chickpea cultivars grown in a normal (healthy) plot and a sick (diseased) plot in response to wilt disease. β-1,3-glucanase activity increased significantly with plant growth and with disease development; in plants grown in the sick plot, activity in the different cultivars varied between 89.02 and 368.42 µmole glucose released h-1 g-1 fresh weight. The susceptible cultivars JG-62 and GG-4 showed significantly higher activity than the resistant and tolerant cultivars. Chitinase activity followed a similar pattern: root tissues from the sick plot showed higher chitinase activity than tissues from the normal plot, the susceptible cultivars followed the same trend as observed for β-1,3-glucanase, and the tolerant cultivars showed lower enzyme activity in root tissues. On the basis of fungal invasion and the hydrolytic enzyme data, the cultivars can be classified as GG-1 (tolerant), GG-2 (tolerant), JG-62 (highly susceptible), JCP-27 and WR-315 (highly resistant) and GG-4 (susceptible) to Fusarium. It is concluded that both β-1,3-glucanase and chitinase play a defensive role and can therefore be used as biochemical markers for the identification of fungus-resistant cultivars.
Introduction
Chickpea (Cicer arietinum L.) is the second most important pulse crop of the world. India is the world's largest chickpea-growing country, contributing about 63 per cent of global chickpea production. Gujarat had a cultivation area of 0.17 lakh hectares and an output of 0.09 metric tonnes, with a yield of 530 kg/ha, in 2000-01 (Anon, 2003), and yields have not changed much since.
Chickpea flour (besan) is an ingredient in various types of sweets and bhajiya, and chickpea is also considered to have medicinal value for blood purification. Its nutritional composition varies among varieties, but on average it contains 21.1-22.8 per cent protein, 55-61.5 per cent carbohydrates and 3-4.5 per cent fat, and it is rich in calcium, iron and niacin (Rathod and Vakhariya, 2008). Resequencing of 429 chickpea accessions from 45 countries identified key candidate genes that were under selection and genes associated with agronomically important traits (Varshney et al., 2019). Numerous approaches to crop improvement have been taken by scientists worldwide, from genomics, proteomics and transcriptomics to the recently suggested "super-pangenome" approach, which involves developing pangenomes of different species within a given genus and provides an opportunity to identify genus-level genomic variation (Khan et al., 2020). However, fundamental knowledge of how these enzymes act on the pathogen, and how their activities differ under different conditions, is crucial for drawing sound conclusions from such experiments. The authors therefore sought to characterize the activities of both enzymes under infection and under normal conditions in chickpea.
Wilt of chickpea (Cicer arietinum), caused by Fusarium oxysporum f. sp. ciceris, is a major limiting factor for chickpea production in the Mediterranean Basin and the Indian subcontinent (Jalali and Chand, 1992). Annual yield losses due to Fusarium oxysporum f.sp. ciceri can be epidemic and devastating to individual crops, causing up to 100% loss under favorable conditions (Halila and Strange, 1996; Chaube and Pundhir, 2005). One defense reaction of the plant related to cell wall modification is the rapid formation of papillae, localized appositions of dense material between the plasmalemma and the cell wall at the penetration site of the pathogen; these are composed of cross-linked proteins, phenolic compounds and callose (Heitefuss, 1997). Enzymes that act on other substrates present in the cell wall include invertase, peroxidase, phosphatase and various dehydrogenases.
Enzymes with potential activity against fungal pathogens include chitinases and β-1,3-glucanases (Cosgrove, 1997). Since the accumulation of PR-proteins, cell wall reinforcement by oxidative cross-linking of structural proteins and the formation of papillae have been documented in the interaction between F. graminearum and wheat (Pritsch et al., 2000; Pritsch et al., 2001; El-gendy et al., 2001; Kang and Buchenauer, 2003), fungal proteases are almost certainly part of the interaction between the pathogen and the host. Therefore, β-1,3-glucanase and chitinase were examined biochemically in the present study to elucidate the changes in enzyme activities at various growth stages in the different cultivars and to test the hypothesis underlying the experimental design, so that the cultivars can be better used under predicted adverse climatic situations.
Materials and Methods
A field experiment was conducted in a split-plot design at the Pulse Research Station, Junagadh Agricultural University, Junagadh, Gujarat, India, with the two plots as the main factor, the cultivars as the split (sub-plot) factor in both plots, and sampling at three stages (pre-infectional, infectional and post-infectional). The chickpea cultivars GG-1 (tolerant), GG-2 (tolerant), JG-62 (highly susceptible), JCP-27 and WR-315 (highly resistant) and GG-4 (susceptible) were grown under field conditions in two plots: a normal (healthy) plot without disease, and a sick (diseased) plot kept for wilt infection of chickpea plants, which has been maintained for 28 years for F. oxysporum f.sp. ciceri with appropriate inoculation, is tested each season for disease infection and is also used for AICRP trials. Root tissues were harvested, rinsed clean with tap water, at the pre-infectional (12 days after sowing), infectional (21 days after sowing) and post-infectional (26 days after sowing) stages from both plots. Samples were stored at -80 °C until analysis.
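The paper does not reproduce its analysis script, so the sketch below is only a hypothetical illustration of how the plot x cultivar x stage (T x V x S) data could be examined with a simplified fixed-effects three-way ANOVA in Python. The file name and column names are assumptions, and a true split-plot analysis would additionally model the main-plot error stratum (for example with a mixed model).

```python
# Hypothetical sketch of analysing the T x V x S design with a simplified fixed-effects
# ANOVA; a proper split-plot analysis would use the appropriate error strata or a mixed
# model. File and column names are assumptions made for illustration only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("glucanase_activity.csv")  # assumed columns: plot, cultivar, stage, activity

model = ols("activity ~ C(plot) * C(cultivar) * C(stage)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))      # F-tests for main effects and interactions
```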
Extraction and Assay of Chitinase Activity (EC 3.2.1.14)
Determination of N-acetylglucosamine (for the chitinase assay): Chitinase was estimated using the method of Boller and Mauch (1988) and Reissig et al. (1955). A suitable aliquot (0.5 ml) of the incubated reaction mixture was taken into test tubes and 0.1 ml of 0.12 M potassium borate buffer, pH 8.9, was added. The tubes were kept in a boiling water bath for exactly 3 min and then cooled in tap water. Three ml of DMAB reagent was added to each tube (10 g DMAB dissolved in 1000 ml glacial acetic acid (AR) containing 12.5% v/v 10 N HCl (AR); stored at 2 °C as a stock and diluted with nine volumes of glacial acetic acid before use), and the tubes were incubated at 38 °C for 20 min. The tubes were cooled and the absorbance was measured at 544 nm in a spectrophotometer. Standard N-acetylglucosamine in the range of 0.05 to 0.30 µmole was prepared in borate buffer and calibrated following the same procedure.
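The calibration step described above lends itself to a short worked example. The sketch below fits a linear N-acetylglucosamine standard curve and converts a sample A544 reading back to µmole of product; all numeric readings are invented for illustration and are not values from the study.

```python
# Hypothetical example of the N-acetylglucosamine standard curve used in the chitinase
# assay: fit absorbance at 544 nm against standard amounts (0.05-0.30 umol), then convert
# a sample reading back to umol of product. All absorbance values below are invented.
import numpy as np

standard_umol = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
standard_a544 = np.array([0.06, 0.12, 0.19, 0.25, 0.31, 0.38])   # assumed readings

slope, intercept = np.polyfit(standard_umol, standard_a544, deg=1)

def nag_umol(a544: float) -> float:
    """Convert a sample A544 reading to umol N-acetylglucosamine via the standard curve."""
    return (a544 - intercept) / slope

print(round(nag_umol(0.22), 3))   # umol NAG released in the assayed 0.5 ml aliquot
```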
β-1,3-glucanases
Chickpea cultivars grown in the normal and sick plots showed significant differences in root β-1,3-glucanase activity. Root tissues obtained from the sick plot contained lower β-1,3-glucanase activity than tissues from the normal plot (Fig. 1). Cultivars also differed significantly in their β-1,3-glucanase activities: the susceptible cultivars JG-62 and GG-4 showed significantly higher activity than the resistant cultivars (WR-315 and JCP-27) and the tolerant cultivars (GG-1 and GG-2), while the tolerant cultivar V3 contained a significantly lower level of enzyme activity in root tissue (Fig. 1). Across the infection stages, β-1,3-glucanase activity increased from 86.55 to 225.84 µmole glucose released h-1 g-1 fresh weight with the advancement of disease and plant growth. The activity increased sharply at the infectional stage (S2) and the increase was even more pronounced at the post-infectional stage (S3).
In general, the data showed an increasing trend in enzyme activity in root tissues from the S1 to the S3 stage.
In plants grown in the sick plot, β-1,3-glucanase activity in the different cultivars varied between 89.02 and 368.42 µmole glucose released h-1 g-1 fresh weight. The susceptible cultivars JG-62 and GG-4 had significantly higher activity, whereas the resistant and tolerant cultivars had lower activity. Plants grown in the normal plot showed a similar trend to that recorded in the sick plot; healthy JG-62 tissues from the normal plot displayed significantly higher enzyme activity than the corresponding plants grown in the sick plot. Resistant and tolerant cultivars grown in the normal plot had almost similar enzyme activities, whereas wide variation in activity was seen in the sick plot. In general, plants grown in the normal plot showed higher enzyme activity than plants from the sick plot, ranging between 200.9 and 382.19 µmole glucose released h-1 g-1 fresh weight. These β-1,3-glucanase data agree with Naik et al. (2005), who reported an increase in β-1,3-glucanase activity in both susceptible and resistant lines in response to Fusarium wilt.
Irrespective of plot (treatment), at the pre-infectional stage (S1) the resistant (WR-315 and JCP-27) and tolerant (GG-1 and GG-2) cultivars showed significantly higher β-1,3-glucanase activity than the susceptible cultivars JG-62 and GG-4; the tolerant cultivar GG-2 had the significantly highest activity at this stage (96.26 µmole glucose released h-1 g-1 fresh weight). Activity continued to rise in all cultivars from the pre-infectional (S1) to the post-infectional stage (S3), but by the latter stage the susceptible cultivars JG-62 and GG-4 showed remarkably higher activity than the resistant and tolerant cultivars. In general, enzyme activity increased with plant growth and tissue development.
The T x V x S interaction for β-1,3-glucanase activity revealed significant differences in root tissues (Fig. 2). Plants grown in the sick plot showed significant changes in root tissues of all six cultivars in response to disease infection. The susceptible cultivars JG-62 and GG-4 showed the highest β-1,3-glucanase activity at all stages of infection compared with the tolerant and resistant cultivars. At the infectional stage, all cultivars had remarkably higher activity, and JG-62 and GG-4 showed an appreciable further change in β-1,3-glucanase from the infectional (S2) to the post-infectional stage (S3).
Enzyme activity declined in the resistant and tolerant cultivars at the post-infectional stage, except in JCP-27, where activity increased. At this stage, changes in activity corresponded with the severity of disease development and showed significant differences among cultivars in root tissues, again with the exception of JCP-27.
With the advancement of disease, the susceptible cultivars had the significantly highest β-1,3-glucanase activity at the infectional stage (S2) compared with the tolerant cultivars and the resistant cultivars (WR-315 and JCP-27). At the post-infectional stage (S3), JG-62 and GG-4 had the significantly highest β-1,3-glucanase activity compared with the other cultivars grown in the sick plot.
In the normal plot, cultivars grown in healthy soil showed an increasing trend in β-1,3-glucanase activity as the plants progressed from S1 to S3. Activity increased sharply from S1 to S3 in all cultivars, although the differences were greater in the susceptible cultivars JG-62 and GG-4. The overall data recorded for β-1,3-glucanase activity are supported by Thangavelu et al. (2003) and Saika et al. (2005). Ramamoorthy et al. (2002) reported that β-1,3-glucanase and chitinase were induced to accumulate at higher levels 3-5 days after challenge inoculation in banana plants bacterized with Pseudomonas fluorescens isolate PF1.
Chitinase activity in chickpea plants grown in either the sick plot or the normal plot did not show a significant change. In general, plants grown in the sick plot had slightly higher activity in all cultivars than plants grown in the normal plot. At the pre-infectional stage, the cultivars did not differ significantly in activity. Enzyme activity declined sharply at the infectional stage (S2), although the reduction in chitinase activity was smaller in the susceptible cultivars, and at the post-infectional stage (S3) it increased again in all cultivars. Saikia et al. (2005) reported that maximum chitinase activity was recorded three days after inoculation in all induced chickpea plants, after which the activity decreased progressively, and that two chitinases were detected in induced chickpea plants infected with Fusarium oxysporum f. sp. ciceris.
The T x V x S interaction did not reveal any significant differences in chitinase activity (Fig. 4). Irrespective of whether plants were grown in the sick or the normal plot, the susceptible cultivars JG-62 and GG-4 had higher chitinase activity than the resistant cultivars at all stages, although at the infectional stage (S2) the plants from the normal plot had a slightly higher value.
In general, the chitinase activity observed in root tissues at the infectional stage in the sick plot (diseased plants) and the normal plot (healthy plants) agrees with the published literature. In some plant species, resistant tissues accumulate chitinase more rapidly and at higher concentrations than susceptible tissues (Benhamou et al., 1990; Hedrick et al., 1988; Irving and Kuc, 1990; Joosten et al., 1990; Rasumussen et al., 1992; Samac et al., 1990; Wyatt et al., 1990). In many of these tissues the resistance response was initially a hypersensitive reaction with very rapid localized cell death (Hahlbrock et al., 1989; Vogeli et al., 1988; Voisey and Slusarenko, 1989).
The overall chitinase data agree with the findings of Shukla (2001) and Shukla and Suthar (2017), who examined chitinase activity in root tissues of resistant and susceptible chickpea cultivars at different stages of infection in inoculated and uninoculated pot experiments. The results of the present field experiments also agree with Cachinero et al. (2002), who studied plant defense reactions against Fusarium wilt in chickpea induced by incompatible race 0 of Fusarium oxysporum f.sp. ciceri and by non-host isolates of F. oxysporum; the defense-related response, including the antifungal hydrolases, was induced more consistently and intensely by non-host isolates than by incompatible FOC race 0. Chitinase in healthy plants may be involved in elicitation reactions that activate plant defense mechanisms, while under disease infection chitinase may also accumulate in response to fungal elicitors and take part in the defense reaction by preventing further development of the fungal pathogen (Cachinero et al., 2002; Saika et al., 2005). Chitinase and β-1,3-glucanase were examined in the present study, but other enzymes capable of degrading the hyphal cell wall are known to be present in higher plants, and the possibility exists that host polysaccharide-degrading enzymes operating under specific conditions could explain the lysis of vascular pathogens. These enzymes can be induced either by infection with pathogens or by treatment with elicitors or chemicals (Bowles, 1990). β-1,3-glucanase is another hydrolytic enzyme involved in degradation of the fungal cell wall, the major hydrolysis products being β-1,3-glucan oligomers. A large number of factors are responsible for accumulation of this enzyme, and the lack of a high degree of pathogen specificity in its induction implies that it is part of a general plant stress response; nevertheless, its induction has been correlated with greater resistance to subsequent pathogen attack. The pattern of rising β-1,3-glucanase activity from the infectional to the post-infectional stage was also observed in healthy plants as they grew. β-1,3-glucanase is actively associated with the infection process as part of the disease resistance mechanism: because the fungus could not progress further, the high level of activity did not persist in root tissues of resistant plants grown in the sick plot, whereas in infected plants the fungus progresses continuously, so a high level of β-1,3-glucanase remains beneficial as part of the defense reaction that hydrolyses the fungal cell wall. The results of the present investigation are supported by the findings of Naik et al. (2005), Thangavelu et al. (2003), Saika et al. (2005), Ramamoorthy et al. (2002) and Rathod (2008).
In conclusion, the results obtained in the field experiments agree with previous findings. The T x V x S interaction for β-1,3-glucanase and chitinase activity revealed significant differences in root tissues infected with Fusarium oxysporum f.sp. ciceri in the sick plot, as well as in diseased tissues of all six cultivars, reflecting a defensive reaction in the susceptible cultivars and the diseased plot, while cultivars grown in the healthy plot remained resistant and showed higher activities of both enzymes under the experimental conditions, supporting the hypothesis underlying the experimental design. It can be concluded that both β-1,3-glucanase and chitinase play a role in disease resistance and can serve as biochemical markers for the identification of fungus-resistant cultivars.
Conflict of interest
There is no conflict of interest. This work forms part of a Ph.D. thesis submitted to Junagadh Agricultural University; some parts have been published in journals and books.
References
Boller, T. and Mauch, F. (1988).
(1990). Subcellular localization of chitinase and of its potential substrate in tomato root tissues.
"year": 2020,
"sha1": "55acb3f79ae0afd2a77bd330515d5a5a4a6f5494",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/9-6-2020/P.%20J.%20Rathod%20and%20D.%20N.%20Vakharia.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7928d542040fd602ed94f01f18a4269d4b198a64",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
253001924 | pes2o/s2orc | v3-fos-license | Mediating Effects of Discipline Approaches on the Relationship between Parental Mental Health and Adolescent Antisocial Behaviours: Retrospective Study of a Multisystemic Therapy Intervention
Poor parental mental health is one of the risk factors for child emotional and behavioural problems because it reduces a caregiver's ability to provide appropriate care for their child. This study aimed to measure changes in parenting factors and adolescent behaviours after Multisystemic Therapy (MST), and to explore the mediating role of discipline approaches in the relationship between parental mental health and adolescent behavioural problems. This retrospective study extracted data collected from 193 families engaged with the MST research program during 2014-2019. Data were collected at different time points (pre-treatment, post-treatment, and 6- and 12-month follow-up). Statistically significant changes were found in adolescent behaviours and parenting factors following the MST intervention, and these positive changes were maintained over the following 12 months. Results of the parallel multiple mediator model analysis confirmed mediating effects of discipline approaches on the relationship between parental mental health and adolescents' behavioural problems. The findings suggest that parental mental well-being contributes significantly to the effectiveness of parenting, which results in positive changes in adolescents' behavioural problems. It is recommended that caregivers' parenting skills and any mental health issues be addressed during the intervention to enhance positive outcomes in adolescent behaviour.
Introduction
Oppositional defiant disorder (ODD) and conduct disorder (CD) are among the most common mental and behavioural problems in children [1]. A meta-analysis conducted in 2015 by Polanczyk et al. [2] reported that disruptive disorders had the second highest prevalence of mental disorders in children and adolescents, at 5.7% (the highest prevalence was anxiety disorder, at 6.5%). A report from the Mental Health of Australian Children and Adolescents Survey [3] indicated that approximately 8% of all Australian children and adolescents met diagnostic criteria for oppositional defiant disorder or conduct disorder. In addition, almost half of these children and adolescents reportedly had co-occurring mental disorders, e.g., ADHD and mood disorders. Children with oppositional problems are negativistic, hostile and defiant, and if untreated they often develop a conduct disorder exhibiting a range of delinquent behaviours including bullying, physical fights, deliberately destroying others' property, breaking into properties or cars, staying out late at night without permission, substance use, and absconding from home and school. Without effective intervention, conduct disorder is a reliable predictor of adult mental illness, substance abuse, chronic unemployment, domestic violence and incarceration [4]. A systematic review of longitudinal studies investigated childhood factors contributing to the effectiveness of parenting interventions for children with severe conduct disorder. It suggested that although positive change in parenting skills predicts improvement in child outcomes, treatment outcomes may vary depending on the parent's ability to actively implement the acquired skills [27,28]. Parents who experience mental health issues, e.g., depression or anxiety, may find it difficult to implement these skills effectively when distressed. Parental psychological well-being can be improved by adequate family, school and community supports, which in turn are likely to promote parental competence. This is even more significant in families from disadvantaged backgrounds and/or minority groups [29][30][31][32]. More recent research has examined the mechanisms that contribute to the effectiveness of interventions for children and adolescents with conduct problems [13,14,28,33,34]. It suggested that many parenting interventions reporting poor outcomes in disadvantaged families often failed to encourage ongoing engagement with families and lacked the service flexibility needed to overcome access barriers. A meta-analysis examining the effectiveness of preventative interventions and treatments for youth antisocial behaviour suggested that treatment approaches that actively engaged parents in the intervention, such as parent support groups, child-centred learning approaches and behavioural parenting training, were associated with larger effects and were therefore recommended when selecting effective interventions for youth with conduct disorders [35].
Multisystemic Therapy (MST) is an intensive family- and community-based treatment targeting antisocial behaviours in adolescents (aged 11-16 years). The intervention was developed from the theory of social ecology introduced by Bronfenbrenner in 1979 [7], which focuses on understanding multi-determined human behaviour through the complex interactions between individuals and the various contextual influences in their lives [7,36]. MST emphasizes the need to identify possible contributors to a child's behavioural problems both within and between the systems in which the child is embedded; the probability of reducing antisocial behaviours is increased by addressing these identified risk factors. The MST intervention utilises a variety of evidence-based therapeutic treatments, including behavioural parenting training (BPT), cognitive behaviour therapy and structural family therapy, whilst employing family systems theory [37] and social ecological theories of behaviour [7]. MST is an intensive intervention, with a therapist holding an average of three sessions per week in the family home for a duration of 4-5 months. A therapist has a concurrent caseload of only 4 to 6 families but is available 24/7 to support parents in times of family disruption and distress during the intervention. The goals of therapy are discussed and established at an early stage of the intervention by the therapist and the family members. Common family goals are reducing aggression, violence and non-compliance in the home and community; improving school attendance and behaviour; and ceasing substance use and antisocial peer involvement. During MST treatment, parents work with a therapist to improve family functioning and parenting skills such as monitoring, communication, problem solving and emotional regulation. Therapists also liaise with schools and other services in the community to provide ongoing support for families if needed. Many research studies (including randomised controlled trials, case-control studies, cohort studies and benchmarking studies) have been conducted internationally by both MST model developers and independent researchers [38] and demonstrated that when the treatment model was implemented correctly (i.e., with high levels of prescribed treatment fidelity), the effectiveness of the intervention was high [39,40]. Findings from MST research indicate that parental discipline approaches are a key mediator of change for MST with adolescent conduct disorders [33,41]. An improvement in parental sense of competence during the MST intervention contributes to positive changes in parental discipline, which results in an improved parent-child relationship and decreased youth antisocial behaviour [42]. A multilevel meta-analysis examining the impact of MST on youth with delinquency found significant treatment effects on delinquency, substance use, recidivism, family functioning and psychopathological symptoms. Although it noted that MST was most effective with delinquent youths under the age of 15, it suggested that treatment for older youths might be improved by focusing more on protective and risk factors in the peer group and school environment [43].
The majority of families referred to the Western Australian CAMHS MST service are socio-economically disadvantaged and experience a wide range of complex and challenging issues. These families have often experienced failed therapeutic interventions or had minimal positive contact with mental health and other social support services in the past. They include many Australian Aboriginal families, and ethnoculturally and linguistically diverse (ELD) families. Over the past ten years, the WA CAMHS MST program has successfully engaged these at-risk populations and assisted them by re-engaging young people in educational/vocational settings, reducing youth homelessness, reducing or ceasing drug and alcohol use, and preventing further involvement with the Police and Justice Departments. A critical initial goal of helping these populations is developing a strong working alliance with parents and other caregivers [44]. The program excels in achieving this critical initial stage of the intervention by working with families in their homes and communities. A local study conducted within the Western Australian Child and Adolescent Mental Health Service (CAMHS) [45] found favourable and enduring outcomes for most families completing the MST intervention.
Previous studies indicated that interventions which address risk and protective factors using systemic approaches are more likely to prove effective for children and adolescents with disruptive behaviours. Nevertheless, the understanding of parental factors as moderators of the success of interventions still requires further exploration. Therefore, the aims of this retrospective study were, first, to examine changes in adolescent behaviours and in parental factors such as parental mental health, discipline approaches and monitoring skill following the MST intervention. Secondly, we aimed to determine the mediating role that parental discipline approaches and monitoring skill have in the relationship between parental mental health and adolescent behavioural problems. The mediating effects were tested at post-treatment because we sought to examine how parents' depression, anxiety and stress levels could affect their ability to implement acquired parenting skills immediately after the intervention. We hypothesised that (1) adolescent behavioural problems, parental depression, anxiety and stress, parental discipline approaches and monitoring skill would improve after the treatment, with improvements sustained at the 12-month follow-up; and (2) that at post-treatment, caregivers who presented with lower levels of anxiety, stress and depression would be more likely to report higher levels of monitoring skill and lower levels of authoritarianism (characterised by hostility and low warmth) or permissiveness in their parenting approach. Consequently, these improved parenting skills would contribute to a decrease in adolescent behavioural problems. We hope the information gained from this study will inform programs and practitioners working with disadvantaged families, in order to understand the mechanisms that may affect the effectiveness of interventions.
Participants and Procedure
This retrospective study extracted data collected from 193 families engaged with the MST research program during 2014-2019. Families were assured that their decision to participate in the research was voluntary and that they could withdraw at any time. Once families agreed to participate, written informed consent was obtained from caregivers. Caregivers were then contacted by research staff to schedule a face-to-face interview. Data were collected through face-to-face interviews and questionnaires administered at baseline, post-treatment, and at the 6- and 12-month follow-ups. The data collection was approved by the Department of Health, Human Research Ethics Committee (DoH HREC), Western Australia.
Child Behaviour Checklist (CBCL)
The CBCL assesses child behaviours and competencies in the context of psychopathology, and in this study the parent-reported version was administered to monitor changes in children's behaviours over time. Caregivers rated childhood internalising behaviours (e.g., anxious/depressed, withdrawn, somatic complaints), externalising behaviours (e.g., rule-breaking behaviour and aggressive behaviour), social problems, thought problems, attention problems and other behavioural problems. The scale consists of 113 items scored on a 3-point Likert scale: not at all (0), somewhat true (1) and very true (2). It has strong psychometric properties, with an internal reliability (Cronbach's α) of 0.97 for the total empirically based problem scale and alphas for each subscale ranging from 0.79 to 0.97 [46].
Depression, Anxiety and Stress Scale-21 (DASS-21)
The DASS-21 is a self-report scale completed by caregivers to measure their negative emotional states of depression, anxiety and stress. The version used for this research is the abbreviated form with three subscales (depression, anxiety and stress) of 7 items each. The internal reliability for the standardised 7-item scales is 0.81 for depression, 0.73 for anxiety and 0.81 for stress [47]. The total score for each subscale is determined by summing the scores of the 7 corresponding items and multiplying the sum by 2. An increase in subscale score(s) over a period of time indicates deterioration in the caregiver's mental health. These scores were also used to determine the severity of the caregiver's depression, anxiety and stress. Originally, severity is categorised as normal, mild, moderate, severe or extremely severe. However, for the purpose of this study the researchers re-categorised these into two subgroups: non-clinical range (i.e., normal and mild) and clinical range (i.e., moderate, severe and extremely severe).
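As a minimal illustration of this scoring rule, the Python sketch below doubles the sum of the seven items in each subscale and applies the conventional DASS-21 severity cut-offs to produce the clinical/non-clinical split described above. The item groupings, cut-off values, and column names are assumptions drawn from published DASS-21 scoring conventions, not from the study's own code.

import pandas as pd

# Standard DASS-21 item membership (item numbers 1-21); assumed, not taken from this study.
SUBSCALE_ITEMS = {
    "depression": [3, 5, 10, 13, 16, 17, 21],
    "anxiety": [2, 4, 7, 9, 15, 19, 20],
    "stress": [1, 6, 8, 11, 12, 14, 18],
}
# Lower bound of the "moderate" severity band (after doubling); assumed conventional cut-offs.
CLINICAL_CUTOFF = {"depression": 14, "anxiety": 10, "stress": 19}

def score_dass21(responses: pd.DataFrame) -> pd.DataFrame:
    """responses: one row per caregiver, columns q1..q21 scored 0-3."""
    scores = pd.DataFrame(index=responses.index)
    for scale, items in SUBSCALE_ITEMS.items():
        cols = [f"q{i}" for i in items]
        scores[scale] = responses[cols].sum(axis=1) * 2  # sum of the 7 items, multiplied by 2
        scores[scale + "_range"] = scores[scale].map(
            lambda s: "clinical" if s >= CLINICAL_CUTOFF[scale] else "non-clinical"
        )
    return scores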
Parenting Styles and Dimensions Questionnaire (PSDQ)
The PSDQ was completed by caregivers [48] and used to measure parenting discipline approaches along a continuum of Baumrind's typology of authoritative, authoritarian, and permissive parenting styles [49]. The PSDQ contains 32 statements describing different caregiver responses to a child's behaviour. It has a 5-point scale ranging from 'never' to 'always' to rate the frequency of certain discipline approaches and responses used by the caregiver. The statements cover three dimensions of the authoritative approach (connection, regulation and autonomy) with an internal reliability of 0.86, three dimensions of the authoritarian approach (physical coercion, verbal hostility and non-reasoning/punitive) with an internal reliability of 0.82, and one dimension of permissiveness (indulgence) with an internal reliability of 0.64. For the purpose of this study, only the scores for the authoritarian and permissive discipline approaches were examined. Decreasing scores for the authoritarian and permissive parenting approaches over a period of time indicate a reduction in a caregiver's negative discipline approaches.
Parental Monitoring Scale
The Parental Monitoring scale was adapted from an existing scale developed by Stattin and Kerr [50] and includes 8 questions using 5-point Likert scales ranging from never to always. This self-report scale asks caregivers about their knowledge of their child's whereabouts, activities, and associations (e.g., "How often do you know: what your child is doing during their free time? with whom your child is spending their free time? what your child spends their money on?"). The internal reliability for this adapted 8-item parental monitoring scale was 0.88. An increased parental monitoring score over a period of time indicates an improvement in the caregiver's monitoring skill.
Data Analytic Strategy
Extracted data were analysed using the statistical software SPSS for Windows, version 24 (IBM Corp., Armonk, NY, USA). Socio-demographic data were analysed using descriptive statistics for continuous numerical variables and absolute and relative frequencies for nominal qualitative variables. Because some follow-up scores were missing, multiple imputation was performed to handle the missing data, as recommended by Van Ginkel et al. [51]. They suggested that multiple imputation is an optimal method, providing a solution to problems commonly found in traditional methods of handling missing data (i.e., listwise deletion, pairwise deletion, and (single) imputation), such as wastefulness, computational problems, biased (co)variances, and biased p values and confidence intervals. These problems are addressed by using a statistical model that accurately describes the data and its random error component to create several plausible complete versions of the incomplete data set. Using the different completed data sets in the statistical analyses produces multiple sets of results, which are then combined into an overall analysis in which pooled standard errors and significance tests are employed.
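The study's imputation was run in SPSS; purely as an illustration of the idea described above, the sketch below generates several plausible completed data sets and pools a simple estimate using Rubin's rules. The imputation model, the number of imputations, and the variable names are illustrative assumptions, not the study's actual procedure.

import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def pooled_mean(df: pd.DataFrame, column: str, m: int = 5):
    """Return a pooled mean and its total variance for `column` after m imputations."""
    estimates, variances = [], []
    for seed in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed)
        completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
        estimates.append(completed[column].mean())
        variances.append(completed[column].var(ddof=1) / len(completed))  # within-imputation variance
    q_bar = np.mean(estimates)                       # pooled point estimate
    u_bar = np.mean(variances)                       # mean within-imputation variance
    b = np.var(estimates, ddof=1)                    # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b              # Rubin's rule for total variance
    return q_bar, total_var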
To test the hypotheses, we broke the analyses down into three stages. For the first hypothesis, preliminary analyses were performed as stage one, to investigate the number of adolescents with improved behaviours at post-treatment and at the follow-ups, and to examine the number of parents reporting scores in the clinical range for depression, anxiety and stress at the different time points. To determine the degree of change in adolescent behaviours (CBCL) from baseline, a value of ±0.5 of one standard deviation was used as the index of significant change, as recommended by the Key Performance Indicators for Australian Public Mental Health Services [52]. Adolescents whose follow-up scores increased from baseline by more than 0.5 SD were classified as 'deteriorated', those whose scores remained within ±0.5 SD as 'no change', and those whose scores decreased by more than 0.5 SD as 'improved'.
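A simple sketch of this classification rule is shown below (Python, illustrative only). The ±0.5 SD threshold follows the rule described above, while the column names are assumptions.

import pandas as pd

def classify_change(df: pd.DataFrame, baseline: str = "cbcl_baseline",
                    followup: str = "cbcl_followup") -> pd.Series:
    """Label each adolescent's change from baseline as deteriorated / no change / improved."""
    threshold = 0.5 * df[baseline].std(ddof=1)   # 0.5 of one baseline standard deviation
    diff = df[followup] - df[baseline]
    def label(d: float) -> str:
        if d > threshold:
            return "deteriorated"   # score increased by more than 0.5 SD
        if d < -threshold:
            return "improved"       # score decreased by more than 0.5 SD
        return "no change"
    return diff.apply(label)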
For stage two, the long-term outcomes were investigated using one-way repeated-measures ANOVA. These analyses were applied to investigate the change in adolescent and parental outcomes across the different time points. At the beginning of the analysis, assumption testing for normality, homogeneity of variance and sphericity was conducted. The severity of departures from sphericity in the one-way repeated-measures ANOVA was assessed using Mauchly's test. A statistically significant Mauchly's test indicated a significant difference between the variances, meaning the assumption of sphericity was violated for the main effects [53]. In that case, the obtained F-ratio was evaluated using adjusted degrees of freedom, calculated with the less conservative Huynh-Feldt epsilon correction [54,55].
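These analyses were run in SPSS; as a rough open-source analogue, the sketch below uses the pingouin package to run Mauchly's test and a one-way repeated-measures ANOVA on long-format data. Note that pingouin applies a Greenhouse-Geisser rather than a Huynh-Feldt correction, and the column names are assumptions.

import pingouin as pg

# long_df: one row per caregiver per time point, with columns
# 'id' (subject), 'time' (baseline / post / 6m / 12m) and 'score' (outcome measure).
def rm_anova_with_sphericity(long_df):
    spher = pg.sphericity(long_df, dv="score", within="time", subject="id")  # Mauchly's test
    aov = pg.rm_anova(data=long_df, dv="score", within="time", subject="id",
                      detailed=True, correction=True)  # epsilon-corrected p when sphericity fails
    return spher, aov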
The scores at baseline were compared with the scores at post-treatment and the follow-ups. A significant difference between baseline and post-treatment/follow-up scores demonstrated changes in adolescent behavioural problems, caregiver mental health, parental discipline approaches and monitoring skill. Partial eta squared (η²p) is the measure of effect size from the main ANOVA, obtained from the SPSS output (as reported in the Tests of Within-Subjects Effects table). However, an effect size (r) for each pairwise comparison should also be reported in addition to the main ANOVA, as recommended by Field [54]. Therefore, we also calculated an effect size for the contrasts by converting F-values to r, using the equation given below. Cohen [56] reported the following intervals for r: 0.1 to 0.3 as a small effect; 0.3 to 0.5 as an intermediate effect; 0.5 and higher as a strong effect.
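Under the convention recommended by Field for a contrast with one numerator degree of freedom, the conversion is r = sqrt(F / (F + df_residual)). A small helper implementing this conversion and Cohen's benchmarks is sketched below; it is an illustrative sketch under that assumption, not the study's code.

import math

def f_to_r(f_value: float, df_residual: float) -> float:
    # r = sqrt(F / (F + residual degrees of freedom)) for a 1-df contrast
    return math.sqrt(f_value / (f_value + df_residual))

def interpret_r(r: float) -> str:
    # Cohen's benchmarks as quoted in the text
    if r >= 0.5:
        return "strong effect"
    if r >= 0.3:
        return "intermediate effect"
    if r >= 0.1:
        return "small effect"
    return "below small"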
For the second hypothesis, the mediating effects of parenting discipline approaches and monitoring skill on the relationship between parental mental health and adolescent behavioural problems were examined as the third stage of the analyses. A parallel multiple mediator model was estimated using the PROCESS v3.4 macro for SPSS developed by Andrew F. Hayes [57]. First, a correlation analysis was performed to examine the inter-correlations between all variables. Then, we tested the hypothesis that at post-treatment the relationship between parental mental health (i.e., depression (X1), anxiety (X2) and stress (X3)) and adolescent behavioural problems (Y) would be mediated by the authoritarian approach (M1), permissiveness (M2) and monitoring skill (M3). The post-treatment scores were used in these analyses because the researchers aimed to examine the mediating effects after the families had received the MST intervention. The aim was to determine how parents' depression, anxiety and stress levels could affect their ability to implement acquired parenting skills immediately after the intervention, which in turn could result in varied adolescent behavioural outcomes. Figure 1 depicts the process in which the independent variables lead to the mediators and the mediators then lead to the dependent variable. With k = 3 mediators, four equations are needed:
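In the standard parallel multiple mediator model described by Hayes (PROCESS model 4), which this analysis follows, the four equations take the form below, where the i terms are intercepts, the a, b and c' terms are regression coefficients, and the e terms are residuals (this notation is assumed for illustration rather than copied from the paper):

M1 = i_M1 + a1*X + e_M1
M2 = i_M2 + a2*X + e_M2
M3 = i_M3 + a3*X + e_M3
Y  = i_Y + c'*X + b1*M1 + b2*M2 + b3*M3 + e_Y

The specific indirect effect of X through mediator Mi is the product ai*bi, and the total effect decomposes as c = c' + a1*b1 + a2*b2 + a3*b3; these products are the indirect effects reported in the mediation results below.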
Descriptive Statistics
A total of n = 193 families were included in the analysis, and 73% (n = 141) of the adolescents were male. The mean age of the adolescents was 13.7 years (SD = 1.40, range 11-16 years). The majority of adolescents were identified as Caucasian (85%), 8% as ethnoculturally and linguistically diverse (ELD) and 7% as Australian Aboriginal. Around half of these adolescents (51%) lived with a single caregiver, 23% with an intact family, 20% with a blended family, and 6% lived with caregivers who were not biological parents (e.g., foster parents, grandparents or relatives). Around half of the caregivers had a high school education or lower (53%), and 53% of families had an annual income (not including welfare benefits) of less than A$50,000 per annum. Around half of these adolescents had used illicit drugs or alcohol at least once in the previous 6 months. Ninety percent (n = 174) of the parents who participated in the research were female, which included biological mothers, stepmothers, foster mothers and grandmothers.
Preliminary Findings
The results from the preliminary analyses using the index of significant change demonstrate that 80% of adolescents exhibited an improvement in total behaviours at post-treatment and at 6 and 12 months after the MST intervention (Table 1). The results from the parental DASS (Figure 2) indicate that at baseline around half of the caregivers reported stress, anxiety and depression in the clinical range. However, these numbers decreased after the MST intervention and continued to decrease at the 6- and 12-month follow-ups.
Long-Term Outcome Findings
The long-term outcomes in adolescents and caregivers were examined using repeated-measures ANOVA; the results are presented in Table 2. A series of pairwise comparisons demonstrated statistically significant differences between baseline scores and follow-up scores, with medium to large effect sizes found between baseline and post-treatment/follow-ups in most measures. The results confirm enduring positive adolescent and parental outcome scores at post-treatment and at the follow-up time points.
Mediating Effect Findings
The inter-correlations between each of the parental factors (i.e., parental depression, anxiety, stress, authoritarian approach, permissiveness and monitoring) and adolescent behavioural problems at post-treatment were examined and found to be all statistically significant. The parallel multiple mediator models (Table 3 and Figure 3) illustrate the total (c), direct (c') and indirect (aibi) effects of parental mental health on adolescent behavioural problems, with parental discipline approaches and monitoring skill as mediating variables. The indirect effects of parental anxiety on adolescent behavioural problems through the mediating variables were estimated (aibi) as follows: authoritarian = 0.039, permissiveness = 0.374 and monitoring skill = 0.155. The indirect effects of stress on adolescent behavioural problems through the mediating variables were: authoritarian = 0.005, permissiveness = 0.259 and monitoring skill = 0.157. The indirect effects of depression on adolescent behavioural problems through the mediating variables were: authoritarian = 0.059, permissiveness = 0.240 and monitoring skill = 0.186. Around a third of the variance in adolescent behavioural problems was accounted for by the proposed mediators (i.e., discipline approaches and monitoring skill) and parental stress, anxiety and depression. The indirect effect pathways indicated significant associations between parental mental health (i.e., anxiety, stress and depression) and parental authoritarian approach, permissiveness and monitoring skill (paths a1, a2, a3). The parental authoritarian approach was predicted by anxiety (R² = 0.21), stress (R² = 0.16) and depression (R² = 0.12). Parental permissiveness was predicted by anxiety (R² = 0.15), stress (R² = 0.12) and depression (R² = 0.15). Parental monitoring skill was slightly and negatively predicted by anxiety (R² = 0.02), stress (R² = 0.03) and depression (R² = 0.06). Adolescent behavioural problems were positively predicted by parental permissiveness and negatively predicted by monitoring skill. The parental authoritarian approach was slightly associated with adolescent behavioural problems; however, this association was not statistically significant.
Discussion
The first aim of this study was to observe the changes in adolescent and parental outcomes after the MST intervention. The results of the preliminary and longitudinal data analyses supported the first hypothesis, indicating that the majority of adolescents referred to MST exhibited positive changes in their emotional and behavioural problems post-treatment, and that these changes were sustained over the following 12-month period. The results also indicated that the majority of caregivers reported significant and enduring improvements in their mental health, parenting and monitoring skills after their involvement with the MST intervention. This retrospective study indicated that Multisystemic Therapy had an enduring positive impact on adolescents and their families. The desired outcomes of the treatment were achieved by increasing caregiver capacity to implement effective parenting skills with the aim of successfully eliciting positive behaviours in their child.
The second aim of this study was to determine the mediating role that parental discipline approaches and monitoring skill have in the association between parental mental health and adolescent behavioural problems. The results from the mediation analysis confirmed both direct and indirect effects between parental mental health and adolescent behavioural problems, as indicated in the second hypothesis. The indirect associations demonstrated that parental anxiety, stress and depression predicted the authoritarian approach and permissiveness, and adversely predicted parental monitoring skill. Subsequently, the parental authoritarian approach, permissiveness and monitoring skill were found to be associated with adolescent behavioural problems. The results suggested that caregivers who reported high levels of depression, anxiety or stress were more likely to report a highly authoritarian or permissive approach and poor parental monitoring skill, which in turn contributed to more problem behaviours in adolescents. It is also worth noting that, when comparing all three mediating variables, parental permissiveness was the strongest predictor of adolescent behavioural problems.
These findings are consistent with other studies [11,21] suggesting that caregivers with poor mental health are more likely to use negative parenting styles (e.g., physical punishment, verbal hostility and/or avoidance) compared to caregivers with better mental health. Therefore, an intervention that improves a caregiver's mental well-being would likely enhance their parenting skills, which in turn would promote positive adolescent outcomes. As recommended by previous research, positive changes in caregivers' parenting skills and mental health are strong indicators of positive treatment outcomes [58][59][60]. This supports the notion that, for practitioners to provide an effective intervention for parents supporting adolescents with emotional and behavioural difficulties, parental mental health issues should also be addressed. Our findings from the mediation analyses suggest that parents who reported low levels of depression, anxiety and stress at post-treatment are more likely to effectively implement the positive parenting skills acquired from the treatment, which improves their child's behaviours and general functioning.
The MST intervention places a focus on teaching caregivers improved communication skills and effective techniques to manage anti-social behaviours and elicit pro-social behaviours in their children. Therefore, the caregiver's ability to implement these skills needs to be evaluated and discussed throughout the intervention. When mental health is found to be a barrier to caregivers being consistent in their parenting, it is important that the clinician addresses this issue. The therapeutic relationship between the clinician and caregivers increases positive engagement with the program and encourages caregivers to seek on-going support to improve their own mental well-being. The outcomes from this study confirm the findings of previous studies that, with the right combination of family and social support, caregivers with mental health issues can improve their own mental well-being, learn to parent well and enrich relationships with their children [1,20,21].
There are some methodological limitations that must be taken into consideration when interpreting the results of this study. Without a control or comparison group, it is difficult to exclude the possible confounding impact of natural variation over time. The results also indicated that only around a third of the variance was accounted for by the proposed mediators and parental mental health factors. Given that this is a retrospective study, the researchers had limited information on other risk factors such as historical family trauma, domestic violence, individual learning disability and/or cognitive impairment. Therefore, this study did not have the opportunity to explore these factors and how they are correlated, and other confounding factors contributing to adolescent behavioural problems should be investigated further. Despite this limitation, this study provides substantial evidence indicating that caregiver mental well-being and positive parenting discipline approaches influence positive outcomes for adolescents. Previous researchers who have examined the evidence base for MST [61,62] noted that, with many existing randomised controlled trials demonstrating the effectiveness of the MST model (Multisystemic Therapy: Research at a Glance, 2022) [38], further research should focus on elucidating the underlying mechanisms of MST effectiveness. Understanding the factors that enhance intervention effectiveness is important for planning policy and clinical guidelines [1].
Another limitation of this study is that all the instruments used were based on parental ratings, either of the adolescent (CBCL) or of the parent (DASS-21, PSDQ and parental monitoring scale). Inclusion of multi-informant measures, e.g., child self-report and teacher report, would provide more perspectives for a comprehensive examination. A longer follow-up period is also recommended to confirm the current results. Further evaluation could include a descriptive analysis of family historical and environmental factors, analysis of comparison groups including cost-benefit analyses, and examination of other confounding factors that potentially contribute to the successful implementation of the MST intervention.
Conclusions
Effective intervention with high-risk youth having major behavioural issues has the potential to positively alter the life-trajectory of these young individuals and avoid predictable negative outcomes, including chronic adult unemployment, patterns of inter-personal aggression, family and domestic violence, various mental illnesses, substance abuse, anti-social and criminal behaviour, probable periods of youth and adult incarceration, and premature death. Effective parent interventions within these families typically involve teaching parents and caregivers improved communication and problem-solving skills designed for generalisation and possible use with any other children having chronic behavioural difficulties. The MST intervention therefore has potential for powerful and enduring positive social influence, resulting in significant cost-saving potential for the wider community across the numerous domains of influence mentioned. | 2022-10-20T16:01:13.624Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "6a0ba2575db48146fcb19db9e559a0fe599423a1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/20/13418/pdf?version=1666014558",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6f73de5bbd1227d11ac20b5a0e387b5e5b880323",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55090771 | pes2o/s2orc | v3-fos-license | Analyzing Lifestyle and Consumption Pattern of Hire Groups under Product Service Systems in Taiwan
This study explores the characteristics of rental goods, integrates the green concept into design and development, and introduces the concept of product service into the rental consumption trend in Taiwan. A questionnaire survey was used to collect consumers' opinions on rental consumption, and Taiwanese consumers were classified into five clusters based on their lifestyles: the simple financial management cluster, the environment and taste cluster, the fashionable and flexible cluster, the careful purchase cluster, and the smart consumption cluster. The conclusions are as follows. (1) Consumers' green consumption cognition of, and attitude toward, environmental goods can help identify the green consumption factors for developing rental commodities. (2) Market segmentation of the rental consumption market can be enhanced using lifestyle variables. (3) Applications with product service rental characteristics should incorporate consumer feedback into the conditions for sustainable product development and expand the service component of the product. (4) As the cost of the cradle-to-cradle recycling pattern is high, governmental support and promotion can help construct the business model of product service rental consumption and develop the rental economy.
Introduction
Human plunder and destruction of nature are closely related to consumption patterns. The purpose of consumption is to satisfy needs; however, in a capitalist society, in order to sell more products, capitalists stimulate demand through various marketing measures that result in unnecessary consumption [1][2][3]. With the rise of environmental consciousness, green consumption has been proposed in response to the crisis of unsustainable economic development. According to Marx, consumption is not only the end but also the start of production; consumption both fulfills and enhances production, and it also influences exchange and distribution [4][5][6][7][8]. Critical positions on consumption suggest that, in order to avoid the crisis of unsustainable development, human beings must change the current consumption model, which is destroying the environment [9,10]. Leasing changes the consumption habits of manufacturers and consumers: purchases become leases, and products that were sold become services [11,12]. Consumers benefit from the functions of products, but ownership remains with the manufacturers. Thus, manufacturers not only satisfy customers' needs for product functions but also reduce product output and sales through the services provided [13][14][15]. This decreases resource consumption and pollutant output and controls the total volume.
According to WRAP in the UK, as of 2009, 143 billion GBP of usable goods are disposed of in the UK every year. Using clothing as an example, the current average utilization rate is only 66%. If goods could be fully used until the end of their product life, this would save consumers 47 billion GBP every year. These data suggest that, driven by environmental protection and by means of rental, hire groups are becoming a new green consumption group [11,13,14].
When consumption trends shift from buying-selling consumption to rental consumption, rental behavior enhances circular consumption. It saves not only product resources but also public social resources. Leases can be divided into capital leases and operating leases. Capital leases are financial leases: the lessees delegate the purchase of new machines and equipment to the lessors and then rent the equipment from them, and depreciation is charged to the lessees' accounts over the relevant periods. During the period of the lease, the lessees have the right to use the goods [11,15].
Thus, the risk should be carefully evaluated and guaranteed. An operating lease refers to a non-capital lease. With an operating lease, the lessors (leasing firms) have ownership of the leased subjects (machines and equipment), while the lessees (enterprises) have the right to use them. Once the lease term expires, ownership of the leased subjects remains with the lessors. During the rental period, the lessors must bear the expenditures for renewal, maintenance, and preventive upkeep of the leased subjects [11,12].
Rentals change the consumption patterns of manufacturers and consumers. As purchases change into rental services, the product originally sold becomes a service: consumers benefit from the product's functions while ownership remains with the manufacturer. The manufacturer can meet customers' demands for product functions and reduce product output and sales by providing a service. In this way, resource consumption and pollutant output can be decreased to achieve the effect of total amount control [11][12][13][14][15].
In modern society, people are concerned about enjoyment and have developed new lifestyles, using the rental concept to enjoy lives that would otherwise be limited by money. New lifestyle groups known as hire groups have thus emerged. Hire groups enjoy renting items; they only care about possessing an item for a period of time rather than for the life of the product. With limited cash, consumers can experience almost unlimited rentals [16][17][18]. Leasing firms sell services, and consumers spend money to satisfy temporary needs instead of acquiring ownership. Some products are used only rarely during the year; for these products, customers can pay a small amount of money for usage rights that last a few days rather than paying a large amount to own them [6][7][8]. In addition, the same products can be repeatedly used and rented. Thus, rentals are not only financially sensible; they also decrease the waste of resources and help protect the environment [1,[13][14][15].
Such a model of meeting the "environmental protection" demand by "selling service" is in line with the concept of the product service system (PSS). A PSS is based on environmental protection and economic considerations [19][20][21][22]. It combines product and service to satisfy consumption demand in order to achieve the dematerialization of the product [23][24][25][26]. The system is usually operated through rentals, shared use, or pricing by unit of use; consumers purchase the product "service" provided by this system rather than the product "substance" [27][28][29][30][31]. Hence, this study introduces the product service concept into the rental development trends of Taiwan, conducts a survey of consumer preferences for rental commodities, and plans to create "Taiwan's environmentally friendly rental life." It is expected to develop products in line with consumer demands and implement the sustainable development of the environment in order to usher in a new era of environmental protection in Taiwan. The purposes of this study are as follows.
(1) Building on the above literature review, this study probed into the emergence and influence of rental consumption groups. Through an investigation of lifestyle, this study explored the lives and characteristics of hire groups in Taiwan and analyzed their cognition of, behavior in, and attitude toward rental consumption.
(2) Through in-depth interviews and questionnaire surveys, this study probed into the rental patterns and product service systems preferred by Taiwanese consumers, as well as consumers' views and expectations of the rental industry.
Research Framework.
In order to probe into the lifestyles of rental consumption groups in Taiwan from the perspective of product service systems, this study adopted a literature review, a questionnaire survey on consumer lifestyles, factor analysis, and clustering analysis to recognize the needs of rental consumption groups in Taiwan and generalize the important factors, as shown in Figure 1. Through the literature review, this study explored theories related to product service systems, rental consumption, sustainable product development, and lifestyles, and the first stage of interviews was conducted. Using the questionnaire survey, this study analyzed the types and patterns of rental consumption accepted by the Taiwanese and the lease and lifestyle factors of rental consumption preferred by different groups, and identified Taiwanese consumers' expectations and thoughts about the rental industry.
Research Subjects.
This study aimed to probe into Taiwanese consumers' views of the rental industry and to generalize the lifestyles of potential rental consumption groups in Taiwan. The subjects were consumers with cognitive and purchasing capability: they were aged 18-55 years, had a college educational level, had an annual income of 0.3-1.2 million NTD, worked in the labor industry, as professional personnel, as students, or in the service industry, and resided in northern, central, and southern Taiwan. The questionnaire survey was conducted to screen potential hire groups in order to explore their lifestyles.
Research Design.
This study used a questionnaire survey, and the investigation included demographics and the subjects' basic personal information. The subjects' consumption behavior regarding rental goods covered their purchases, consumption cognition, and consumption attitude, as well as the factors behind their selection of rental consumption. There were five parts in the questionnaire survey. First, the demographic variables of the subjects captured the structure of the sample and provided enough information for analyzing the problems and interpreting the results.
Second, the subjects' rental consumption behavior was surveyed to understand Taiwanese consumers' rental consumption preferences.
Third, the subjects' awareness of rental consumption was measured using a five-point Likert scale. Each question offered "strongly agree," "agree," "average," "disagree," and "strongly disagree," scored from 5 to 1 points. The contents of this part of the questionnaire covered leasing concepts, environmental sustainability, and sustainable rental consumption.
Fourth, the subjects' attitudes toward rental consumption were surveyed primarily to understand the factors influencing rental consumption.
Finally, this study investigated the subjects' rental of goods in the product service system. The questions were designed according to the dimensions of activities, interests, and opinions (A.I.O.) to investigate the consumers' lifestyles. Through clustering, the subjects were grouped into lifestyle types to analyze the reactions of different lifestyle and demographic groups, in order to find whether there were significant rental consumption differences among the groups.
Factor Analysis and Reliability Test of the Lifestyles.
In order to analyze the subjects' different types of lifestyle, this study used factor analysis to simplify the 31 questions on the subjects' lifestyles, adopting principal component analysis with varimax orthogonal rotation to extract the main factors. As to the reliability of the questionnaire scale, after reliability analysis the total reliability was α = 0.776, which was higher than 0.7; thus, the scale of this study was reliable (see Table 1). Factor analysis aims to extract common factors from numerous variables in order to simplify the number of variables; the purpose is to represent a great number of variables using a few factors while keeping most of the information in the original variables. Before the extraction, this study conducted the KMO measure of sampling adequacy and Bartlett's test of sphericity in order to determine whether the data were suitable for factor analysis. Kaiser suggested that the higher the KMO value, the more effective the factor analysis [32]: a value of at least 0.9 means the effect is extremely appropriate, at least 0.8 appropriate, at least 0.7 acceptable, at least 0.6 normal, and below 0.5 inappropriate. According to Table 2, KMO = 0.763 and the significance of Bartlett's test of sphericity (χ²) was p = 0.000, indicating that the data were appropriate for factor analysis. After these tests, principal component analysis was conducted to extract the common factors, with a total eigenvalue >1 as the standard. The total explained variance was 58.896%, which met the standard of being >40%, and the smallest eigenvalue was 1.416, which met the standard eigenvalue of >1, as shown in Table 3. The principles for deleting items were (1) items with a low factor loading; (2) items that loaded on three common factors; (3) factors that included only two or fewer items; and (4) items with low reliability [34]. According to the component matrix after rotation in Table 4, Factor 7 and Factor 8 included only one and two items, respectively. The comparison revealed that the factor loading of Factor 7 was lower than that of Factor 8; thus, Factor 7 was deleted.
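The extraction above was performed in SPSS; purely as an illustration of the same pipeline (KMO, Bartlett's test, principal component extraction with varimax rotation, and a salience threshold of 0.3 as noted later in the text), a Python sketch using the factor_analyzer package is given below. The variable name items and the choice of eight factors are assumptions for illustration, not the study's code.

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

def extract_lifestyle_factors(items: pd.DataFrame, n_factors: int = 8) -> pd.DataFrame:
    """items: one row per respondent, one column per lifestyle statement (Likert scores)."""
    chi_square, p_value = calculate_bartlett_sphericity(items)  # Bartlett's test of sphericity
    _, kmo_overall = calculate_kmo(items)                       # overall KMO measure
    print(f"Bartlett chi-square = {chi_square:.1f} (p = {p_value:.4f}), KMO = {kmo_overall:.3f}")
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                            columns=[f"Factor {i + 1}" for i in range(n_factors)])
    return loadings.where(loadings.abs() >= 0.3)  # keep loadings of at least 0.3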
After deleting factors with one or fewer items, seven factors were obtained. According to the meanings of the items with high factor coefficients shown in Table 4, the factors were named fashionable, stable and cautious, unique taste, strict budgeting, environmental, interactive consumption, and economic and flexible, as shown in Table 5.
Lifestyle Clusters and Difference Analysis
The analysis in this section was conducted according to the lifestyle items of the questionnaire, and consumers were divided into different clusters. At the first stage, Ward's method, also known as the minimum variance method (in which the distance between two observations {xi} and {xj} is d({xi},{xj}) = ‖xi − xj‖²), was adopted. Based on the intervals of the squared Euclidean distance, this study judged the maximum increase in total variance and the stage at which it occurred in order to decide the number of clusters [35]. As shown in Table 6, the change in the coefficient of concentration was most significant when the number of clusters was reduced from three to two; thus, there should be three clusters. After the three clusters were decided using Ward's method, this study conducted k-means clustering analysis and allocated the 198 consumers into three lifestyle clusters. According to the results of the three-cluster k-means analysis, one cluster contained only one subject.
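As an illustrative sketch of this two-stage procedure (Ward's hierarchical clustering to judge the number of clusters from the jump in the agglomeration coefficients, then k-means to assign respondents), the Python fragment below uses SciPy and scikit-learn; the array name and the parameterized cluster count are assumptions for illustration.

import numpy as np
from scipy.cluster.hierarchy import linkage
from sklearn.cluster import KMeans

def two_stage_clustering(factor_scores: np.ndarray, n_clusters: int) -> np.ndarray:
    """factor_scores: array of shape (n_respondents, n_factors)."""
    # Stage 1: Ward linkage; column 2 of the linkage matrix holds the merge distances,
    # whose successive increases correspond to the agglomeration (concentration) coefficients.
    merge_distances = linkage(factor_scores, method="ward")[:, 2]
    print("last merge distances:", np.round(merge_distances[-6:], 3))
    # Stage 2: k-means with the chosen number of clusters.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(factor_scores)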
With a division into six clusters, two clusters had only one subject each. Thus, this study also tried two, four, and five clusters. The clustering precision, however, needed to be validated by judgment analysis. Based on Table 7, the significance was p = 0.000 < α = 0.05, and the judgmental capability was significant. This study then probed into the differences between the factors and the lifestyle groups using one-way ANOVA and validated the clustering result, as shown in Table 8.
According to the result of the ANOVA, when there were five clusters, the p values of all factors were below α = 0.05. Thus, the five clusters differed significantly on the seven factors, and the different groups were effectively segmented. Finally, the seven factors were divided into five clusters by k-means clustering, as shown in Table 9. According to the figures, Factor 1 (fashionable) had a significantly positive relation with Cluster 3 and a significantly negative relation with Cluster 5. Cluster 1 had a positive correlation with Factor 2 (stable and cautious) and Factor 4 (strict budgeting), and a negative correlation with Factor 1 (fashionable), Factor 3 (unique taste), Factor 5 (environmental), Factor 6 (interactive consumption), and Factor 8 (economic and flexible). Thus, Cluster 1 was more practical and not fashion-oriented, as the subjects did not have unnecessary expenses. Based on the above, Cluster 1 was named simple financial management.
Based on Table 9, this study used one-way ANOVA to determine the differences among the groups and factors. According to their characteristics, the clusters were named as follows: Cluster 1, simple financial management; Cluster 2, environment and taste; Cluster 3, fashionable and flexible; Cluster 4, careful purchase; and Cluster 5, smart consumption. After the clustering analysis, cross analysis and chi-square tests were conducted to find significant differences between the groups. The distribution of the group demographics is shown in Table 10.
According to the results of the chi-square test, the lifestyle groups in this study showed significant differences in age and annual income. The p values for the remaining items were above the 0.05 significance level, indicating no significant difference; in other words, the clusters did not differ significantly on those items, as shown in Table 11.
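The cross analysis and chi-square tests reported here can be reproduced in outline with the sketch below, which uses SciPy; the column names are assumptions for illustration.

import pandas as pd
from scipy.stats import chi2_contingency

def cluster_demographic_test(df: pd.DataFrame, demographic: str,
                             cluster: str = "lifestyle_cluster"):
    """Cross-tabulate a demographic item against the lifestyle clusters and test independence."""
    table = pd.crosstab(df[cluster], df[demographic])  # cross tabulation of cluster x demographic
    chi2, p, dof, _ = chi2_contingency(table)          # Pearson chi-square test of independence
    return table, chi2, p, dof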
According to the result of the clustering analysis, the demographics of different groups shown in Tables 9 and 10 were compared, as shown in Table 12.
Investigation on Rental Consumption in the Product Service System.
Using the questionnaire survey in the first stage, this study screened potential rental groups. The items of the questionnaire were generalized from the literature and covered rental consumption behavior, rental consumption cognition, rental consumption attitude, and a lifestyle A.I.O. questionnaire. After integrating the related data, this study treated the result as the criterion for the expected goals. The subjects were consumers with cognitive and purchasing capability, and the study focused on consumers above 18 years of age. At the first stage, 206 questionnaires were distributed and 198 valid questionnaires were retrieved; the majority were online questionnaires, followed by paper-based questionnaires. The aim was to find the rental patterns and types that could be accepted by consumers in Taiwan, as well as the lifestyle factors of the groups that could accept the rental model.
Analysis of the Rental Consumption Behavior in the Product Service System
According to the figures shown in Table 13, 181 subjects (91.41%) had engaged in rental behavior, while 8.59% of the subjects had never engaged in leasing behavior. According to the responses for the product items, real estate, publications, and transportation have long been associated with the rental business, and at least 80% of the subjects had rental experience with them. Only 20% of the subjects had rented clothing, outdoor items, and cards, which can be rented in many different places. As for furniture rental, 15.66% of the subjects had experience; it was inferred that furniture in rented rooms was considered rented furniture, which is common for the public in Taiwan. As for assistive devices, which are expensive and whose rental is promoted by the government, only 10.61% of the subjects had rented them. As for baby items, which are renewed frequently, only 8.08% of the subjects had rented them, showing that the subjects were used to purchasing rather than renting such items, not to mention electric appliances, daily articles, and live objects, whose rental is rare in the market. The above indicated that Taiwanese consumers are not used to renting goods for short-term use, and the implementation of the rental business has significant room for improvement.
Analysis of Rental Consumption Cognition of the Product Service System
According to the figures shown in Table 14, the subjects agreed with the green effectiveness of leasing and they had a positive attitude. Thus, the development of leasing in a product service system could be a new green consumption model.
Analysis of Rental Consumption Attitude of the Product Service System
According to the figures in Table 15, the most significant conditions for consumers to accept rental consumption were low use frequency and high prices of goods, and there should be a clear rental contract and process. Renting is not the traditional consumption model in which ownership changes hands in a transaction, and many extended situations can arise. Thus, the subjects worried that the rented goods would not always have been used privately and that their condition was uncertain, and they questioned the cleanliness of the goods and the compensation required after damage. Leasing firms should therefore be extremely careful about the quality of the rented goods: goods should be clean, in good condition, and not easily damaged by consumers' ordinary use.
Difference Analysis of Different Lifestyle Groups on Rental Consumption in the Product Service System
This section explored the different groups' rental intention, prices, use frequency, matching of goods, propriety of goods, renewal of goods, leasing process details, exclusiveness of goods, damage to goods, cleanliness of goods, and additional services. Regarding the content of these items, this study conducted cross analysis and chi-square tests to find significant differences among the groups.
According to the figures for the 11 items across the different lifestyle groups shown in Table 16, the chi-square test showed that some p values were below the 0.05 significance level. The items that reached a significant difference were item 2 (prices), item 4 (matching of goods), and item 5 (propriety of goods). The remaining items were not significantly different and are therefore not discussed.
According to the figures for item 2 (prices), all the clusters agreed that the price of goods was a factor in choosing to rent. This result has two implications: first, when goods are expensive, they are more likely to be rented; second, the prices of rented goods should be advantageous in order to attract consumers. Noticeably, in Cluster 4, up to 20% of the subjects stated that they disagreed. It was inferred that they mostly did not have economic advantages and were supported by their families, so they did not feel economic pressure. Besides, being concerned about taste, they did not save all their extra money and were more likely to spend money on goods that they liked or needed; after careful consideration, they would pay for certain types of products.
Based on the figures for item 4 (matching of goods), all the lifestyle clusters agreed that they would consider obtaining the usage rights of products by renting them when they had to use (or match) different products on different occasions. This showed that they could rent more expensive goods or products that are changed frequently (such as luxury bags) to demonstrate their identities.
According to the figures for item 5 (propriety of goods), more than half of Cluster 4 and Cluster 5 agreed with careful purchasing and being concerned about taste, and they did not mind using second-hand goods; they preferred trials before purchasing. The smart consumption cluster is pragmatic when selecting goods and obtains products by the most economical measures, and its members did not mind obtaining the usage rights of goods by renting them. Thus, if the products of rental firms match the characteristics of these two groups, the firms can treat these groups as their target customers.
Conclusions
Because of the importance of consumption, the only way to escape the crisis of unsustainable development is to change the resource-exhausting consumption patterns that destroy our living environment. Rental is a consumption pattern that allows goods to be used repeatedly. As purchase is changed into rental service, consumers benefit from the product functions, but ownership remains with the manufacturer. The manufacturer can not only meet customer demands for product functions but also reduce product output and sales by providing services, thereby decreasing resource consumption and pollutant output. Selling services can thus achieve the environmental appeal, which is exactly the approach of the product service system. This study explores the characteristics of rental goods, integrates the green concept at the design and development end, and introduces the concept of product service into the rental consumption trend in Taiwan. As a result, sustainable products can be developed for rental consumption to maximize the green effect. This study used a questionnaire survey to collect consumers' various opinions on rental consumption and classified Taiwanese consumers into five clusters based on their lifestyles: the simple financial management cluster, the environment and taste cluster, the fashionable and flexible cluster, the careful purchase cluster, and the smart consumption cluster. Furthermore, this study details the product service rental patterns and types preferred by each cluster, as well as their opinions and expectations of the rental industry. Finally, the following conclusions are drawn from the phenomena shown in the statistics and research data. Using Aurora Office Furniture as an example, cradle-to-cradle recycling is a huge burden for the enterprise, and it will not adopt it. However, the implementation of sustainable goods development for rental consumption in a product service system should be assisted by the government and led by large enterprises. This would establish a model rental economy in Taiwan and indirectly influence the product selection and operations of consumers and of small and medium enterprises in the future.
Table 5 :
Meanings of the names of factors.
Factor: Fashionable. (i) The subjects are concerned about fashion and change, and they try new things. (ii) They carefully dress themselves to show their extraordinary taste. (iii) They reward themselves by purchasing luxury goods. (iv) They are careful about the quality of the goods.
Factor: Stable and cautious. (i) The subjects are more conservative and good at financial management. (ii) They do not have unnecessary dreams or expenses. (iii) They always make plans and are satisfied with their current lives.
Factor: Unique taste. (i) The subjects have their own opinions. (ii) They enjoy challenging work and are concerned about the taste of life. (iii) They do not mind using secondhand goods. (iv) Before purchasing goods, they prefer having a trial period, and they believe that they can obtain a life with personal style and unique taste from a flea market.
Factor: Strict budgeting. (i) The subjects prefer purchasing goods by the most practical measures. (ii) They do not care about fashion. (iii) They enjoy classic and resistant patterns. (iv) They do not have unnecessary expenses. (v) They save extra money in the banks.
Factor: Environmental. (i) The subjects are concerned about the environment and ecology. (ii) They use their own shopping bags, cups, and tableware. (iii) They avoid goods that are only used once. (iv) They treat the environment as a priority when purchasing products.
Factor: Interactive consumption. (i) The subjects have frequent interaction. (ii) Besides offering the latest consumption information to relatives and friends, they are careful about green information and recommend environmentally friendly goods. (iii) They engage in purchase behavior that has the most economic effectiveness. (iv) They use coupons and wait for discount periods to buy goods.
Factor: Economic and flexible. (i) The subjects prefer flexible purchases of products. (ii) They are good at paying with credit cards. (iii) They obtain usage rights with little money and treat this as a flexible measure to keep their money. (iv) They prefer rentals instead of buying goods.
Table 1 :
Analysis of the total reliability of the lifestyle scale.
Table 2 :
KMO and Bartlett's test of lifestyle scale.
Table 3 :
Eigenvalue of factors, explained variance after rotation, and cumulative explained variance of the lifestyle scale.
Table 4 :
Component matrix of the lifestyle scale of principal component analysis after rotation.
Table 6 :
Coefficient of concentration of Ward's method.
Table 7 :
Validation result of clustering by judgment analysis.
Table 9 :
Factors of the lifestyle clusters and means of coefficients.
This study conducted orthogonal rotation and reduced the 31 questions into eight factors. According to the significance principle of factor loading proposed by Hair et al. [33], a factor loading that reaches 0.3 is acceptable. The questions used for the factors in this study are shown in Table 4.
Table 10 :
Distribution of the demographics of different groups.
Table 11 :
Significance of the Pearson chi-square test.
Table 13 :
Scale cross table of the subjects' rental consumption behavior.
Table 14 :
Scale cross table of the subjects' rental consumption cognition.
Table 15 :
Scale cross table of the subjects' rental consumption attitude.
Table 16 :
Distribution of figures of 11 items in different lifestyle groups.
4.1. Consumers' Green Consumption Cognition and Attitude toward Environmental Goods Help Control the Green Consumption Factors of Rental Goods Development.
Green consumption and environmental goods influence each other. In the use of rented goods, environmental effectiveness can be enhanced by increasing the use rate; thus, if goods are consumed using a rental model, such action can be seen as green consumption. Consumers agree with this concept; thus, rental goods have environmental implications for consumers. However, consumers have different market characteristics. At the early stage of development of rental products, surveys must be conducted in order to recognize the different subjects' preferences for goods. At present, consumers in Taiwan worry about situations related to the change of ownership in rental consumption. Thus, if the rental characteristics of goods can be reinforced at the development stage and if the development of rental goods is guided by the extracted factors, the total value of rented goods can be effectively upgraded.
4.2. In a Rental Market, Market Segmentation Can Be Reinforced by Lifestyle Variables.
According to interviews with the enterprises, leasing firms suggested that an important measure for developing rental goods is product market segmentation. This study focused on consumers with purchase capability and extracted lifestyle factors by factor analysis of the 31 A.I.O. items. Through clustering analysis, the subjects were divided into five clusters. It was found that the public is willing to acquire the usage rights of products through rental consumption; thus, rental consumption in Taiwan can be economically effective. The difference analysis of the lifestyle survey in this study indicated the different cluster preferences in detail. Thus, lifestyle surveys could help probe into rental consumption groups' preferences, serve as a reference for market segmentation, and allow firms to handle marketing more easily.
4.3. In Developing the Rental Consumption Characteristics of the Product Service System, Consumers' Opinions Should Be Included in the Development Conditions and the Product Service Must Be Reinforced.
As to sustainable goods developed for rental consumption in a product service system, the main concerns are extending the product life cycle of components and returning processed waste to the development end. Besides, users' feedback after using the products is also important for the development of the next stage. From the consumers' perspective, users use rental goods more frequently than purchased goods, and it is difficult to predict users' usage. Thus, when developing products, design engineers should pay attention to the new lifestyle of rental goods groups in order to reinforce the services of the products. This study generalizes sustainable product development principles for rental consumption in a product service system as new rules for developing rental goods; recycling and usage are particularly critical, and consumers' feedback can be provided through an information platform.
4.4. The Cost of the Cradle-to-Cradle Recycling Pattern Is High: Governmental Assistance and Promotion Will Help Construct a Sustainable Consumption Model of Rental Consumption in the Product Service System and Influence Consumers' and Enterprises' Selection of the Rental Business Model.
According to the expert interviews and analytical results, within the overall green regulations, the rental consumption model should particularly modify the usage stage and the waste recycling stage; this is the result of the change of ownership. However, not all types of firms can accomplish a
cradle-to-cradle recycling model. | 2018-12-08T14:42:27.787Z | 2013-12-26T00:00:00.000 | {
"year": 2013,
"sha1": "e7d09cc555bfcd8568eb3425c2742e6bc909ce45",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2013/710981.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e7d09cc555bfcd8568eb3425c2742e6bc909ce45",
"s2fieldsofstudy": [
"Business",
"Economics",
"Environmental Science"
],
"extfieldsofstudy": [
"Business"
]
} |
251787422 | pes2o/s2orc | v3-fos-license | Magnetic resonance imaging for fistulography in perianal fistula: clinicoradiological correlation
ABSTRACT Background: This article aims to review the role of magnetic resonance imaging (MRI) fistulography in the evaluation of perianal fistula, along with its concordance with clinical examination and its impact on surgical intervention. Methods: A retrospective study of 61 patients who underwent surgery for anal fistula at RMLIMS was conducted, with data collected from the database from January 1, 2017 to September 2021. Results: The study showed a significant MRI contribution to clinical evaluation in 65.6% of patients. MRI provided significant information more often for complex fistulas than for simple fistulas (45% vs. 14.6%, p=0.01). The proportion of patients with a significant MRI contribution increased with increasing Parks grade (grade 1, 8.3%; grade 2, 52.2%; p=0.001). The concordance between the St. James's Hospital grade and the Parks classification was 0.768 (kappa coefficient, p<0.00). Conclusions: We propose inclusion of MRI in the preoperative surgical assessment of anal fistulas when they are recurrent, complex, or high grade, or when the external opening is located more than 2 cm from the anal canal.
INTRODUCTION
Perianal fistula (PAF) is an abnormal tract communicating an external cutaneous opening in the perianal region with an internal opening, most often in the anal canal. 1 PAF is one of the common anorectal disorders in surgical practice, with a high prevalence, and it predominantly affects young adult males. 2,3 Most fistulas (approximately 90%) are non-specific, of cryptoglandular origin, resulting from infection of the anal glands. 4 The rest are due to a specific etiology such as tuberculosis, Crohn's disease, ulcerative colitis, pelvic infections, radiation, carcinomas, and trauma to the anorectal region. 5 The classification of fistula in ano proposed by Sir Alan Parks in 1976 is by far the most widely followed, dividing anal fistulas into intersphincteric, transsphincteric, suprasphincteric, and extrasphincteric varieties. 6 The Standard Practice Task Force (SPTF) of the American Society of Colon and Rectal Surgeons classified fistulas as "simple" and "complex", the latter identifying an increased risk of incontinence after surgery (Table 1). 7 For successful management of a fistula, it is important to delineate its complete anatomy, which includes correct identification of the internal opening, the primary site of cryptoglandular infection, and the course of the primary and secondary tracts or abscesses, if any. Failure to identify these may result in recurrence. In cases of simple fistulas, this identification is possible with a careful digital rectal examination (preferably bi-digital). However, problems arise in cases of recurrent and complex fistulae. A fistula which seems complex on physical examination should be evaluated with radiodiagnostic techniques. 8 Various imaging modalities have been applied for evaluation of fistula in ano; conventional fistulography was used, but its diagnostic yield is limited owing to the difficulty of recognizing the internal opening. 9,10 Endosonography with color Doppler has greater diagnostic value for PAF evaluation. 11 Three-dimensional ultrasonography (3D US) improves PAF detection and delineation and hence plays a crucial role in optimal treatment planning, although the need for expertise is one of its limitations. 12 Transperineal US is an accurate diagnostic method; owing to its simplicity and low cost, it is recommended as the first diagnostic modality for anal fistula. 13 MRI use in anal fistulas was first reported in the early 1990s, showing 87.5% concordance with surgery. 14 The Association of Coloproctology of Great Britain and Ireland defined MRI as an imaging technique with high sensitivity and specificity for diagnosis of the primary fistula tract and recommended it for imaging assessment of complex or recurrent fistulas. 8 Owing to the high soft tissue resolution of MRI, the internal opening of an anal fistula, the primary and secondary tracts and their relationship with the sphincter complex, and the presence of horseshoe fistulas and abscesses can be depicted more accurately preoperatively than with physical examination alone. 15 A classification based on MRI findings was also developed at St. James's Hospital (Table 1). 16 The objective of this article is to review the role of MRI fistulography in the diagnosis and evaluation of fistula in ano, along with its concordance with clinical examination and its impact on surgical intervention.
METHODS
This retrospective study was conducted in the department of surgical gastroenterology, Dr. Ram Manohar Lohia Institute of Medical Sciences, Lucknow. All patients who were operated on for fistula in ano in the department during the period mentioned below were included in the study. Data for all patients who underwent surgery for anal fistula from January 1, 2017 to September 2021 in the department of surgical gastroenterology were collected from a database management system; hence, ethics committee approval was not required for our study. The data included the physical examination notes, preoperative surgical plan, MRI findings, and operative findings, which were retrieved using personal identifiers from the electronic records department of the hospital. The following characteristics were assessed for each fistula-in-ano: the location of primary tracts, the presence of secondary tracts and abscess formation, and the site of internal and external openings. Fistulas were classified according to the Parks and St. James's University Hospital classifications. 6,16 In the image interpretation, it was assumed that a fluid collection larger than 10 mm in diameter with rim enhancement on post-contrast T1W TSE images was an abscess, as per the criteria of Singh et al and Torkzad et al. 17,18 All surgeries were performed by or under the supervision of surgeons with at least 5 years of experience in surgical gastroenterology. During surgery, the characteristics of each fistula-in-ano were also carefully documented; the Parks grade and SPTF classification were obtained from the operative notes and then used as the reference standard against which MRI findings were compared.
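As a concrete illustration of how these recorded characteristics and the abscess criterion could be represented in code, a minimal Python sketch follows. The record fields, enum, and helper function are hypothetical assumptions for illustration only and are not the study's actual database schema; only the >10 mm rim-enhancement rule is taken from the text above.

```python
# Minimal, illustrative sketch (assumed field names, not the study database schema).
from dataclasses import dataclass
from enum import Enum


class ParksGrade(Enum):
    INTERSPHINCTERIC = 1
    TRANSSPHINCTERIC = 2
    SUPRASPHINCTERIC = 3
    EXTRASPHINCTERIC = 4


@dataclass
class FistulaRecord:
    parks_grade: ParksGrade
    st_james_grade: int                   # St. James's University Hospital grade
    has_secondary_tracts: bool
    external_opening_distance_cm: float   # distance of external opening from the anal canal
    collection_diameter_mm: float         # largest fluid collection seen on MRI, if any
    rim_enhancement_on_t1: bool           # rim enhancement on post-contrast T1W TSE images


def is_abscess(record: FistulaRecord) -> bool:
    """Apply the image-interpretation criterion stated above: a fluid collection
    larger than 10 mm in diameter with rim enhancement is treated as an abscess."""
    return record.collection_diameter_mm > 10 and record.rim_enhancement_on_t1
```

A record built this way could then be compared field by field against the operative notes that serve as the reference standard.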
Statistical analysis
For the primary endpoint, the study aims to determine the clinical characteristics (from history and physical examination) that identify patients likely to benefit from preoperative MRI. The study cohort of 61 patients (categorized into significant and non-significant MRI contribution groups) provides 80% power, at a 5% type I error level, to identify statistically significant differences of between 15% and 25% in the clinical findings observed in these two groups. As a secondary endpoint, the concordance between the classification schemes without and with the use of information from MRI (the Parks and St. James's classifications, respectively) was analyzed.
Descriptive statistics were provided as mean and standard deviation for age and as percentages for the categorical variables. The concordance between the two grading schemes was analyzed using the kappa coefficient. Differences between groups were analyzed using the chi-square or Fisher's exact test for nominal variables and the Mantel-Haenszel test for ordinal variables. A p<0.05 was used as the cutoff to infer statistical significance.
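To show how the concordance and group comparisons described above could be set up, a short Python sketch is given below. This is not the authors' analysis code: the scipy and scikit-learn calls are one possible toolset, the per-patient grades and contingency-table counts are hypothetical placeholders (chosen only to echo the reported 14.6% vs. 45% proportions in a cohort of 61), and the Mantel-Haenszel test for ordinal trends is omitted from the sketch.

```python
# Illustrative sketch only (not the authors' analysis code).
from scipy.stats import chi2_contingency, fisher_exact
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-patient grades: Parks grade from surgery vs. St. James's grade from MRI.
parks_grades = [1, 2, 2, 3, 1, 4, 2, 3]
st_james_grades = [1, 2, 3, 3, 1, 4, 2, 3]

# Concordance between the two grading schemes (the study reports kappa = 0.768).
kappa = cohen_kappa_score(parks_grades, st_james_grades)
print(f"Cohen's kappa: {kappa:.3f}")

# 2x2 table of simple vs. complex fistulas against significant vs. non-significant
# MRI contribution; counts are placeholders roughly consistent with the reported
# 14.6% (simple) and 45% (complex) proportions, not taken from the study data.
table = [[6, 35],   # simple fistulas: [significant, non-significant]
         [9, 11]]   # complex fistulas

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # preferred when expected cell counts are small
print(f"chi-square p = {p_chi2:.3f}, Fisher's exact p = {p_fisher:.3f}")
```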
RESULTS
The total number of eligible patients was 61. There were 51 females (83.6%). In total, 15 patients suffered from recurrent fistulas (24.6%). MRI was concordant with operative findings in 83.1% of the patients (Table 2).
MRI contribution to clinical evaluation was significant in 65.6% (40/61) of the patients. MRI more frequently provided significant information for complex fistulas than for simple fistulas (45% vs. 14.6%, p=0.01). The proportion of patients with a significant MRI contribution increased with increasing Parks grade (grade 1, 8.3%; grade 2, 52.2%; p=0.001). Preoperative MRI contribution was also more frequent if the external opening was more than 2 cm away from the anal canal (28.9% vs. 9.5%), but this difference was not statistically significant. Although not statistically significant, the contribution of MRI was slightly greater for recurrent fistulas than for primary fistulas (40% significant contribution vs. 19.6%, p=0.11) (Table 3). The concordance between the St. James's Hospital grade and the Parks classification was 0.768 (kappa coefficient, p<0.00).
DISCUSSION
The surgical treatment of anal fistula requires identification of the primary as well as secondary tracts and their relation to the sphincteric musculature for proper management of the fistula and drainage of any abscess, if present. Physical examination alone may not be enough to delineate these features, and recurrence is usually due to infective foci missed at the first surgery. [19][20][21] MRI is the most accurate imaging modality to define anal canal anatomy and anal fistulae. 22,23 With 61 patients, our study identifies the group of patients for whom MRI fistulography significantly contributes to the surgical management of the disease. In our study, MRI provided important additional information for nearly one-third of the patients. Higher Parks grades, a greater distance of the external opening of the fistula from the anal canal, and complex fistulas are indicative of a significant MRI contribution following clinical examination.
Garg et al, in a study evaluating the MRI contribution to surgical management in 229 patients, reported that MRI added significant information in patients with additional tracts, horseshoe tracts, supralevator extension, unsuspected abscesses, and multiple internal openings. 24 Using these parameters, they inferred that MRI added significant information to 46.7% of the surgeries. In a study by Beets-Tan et al, when the investigators delivered the MRI results to the surgeon just before the decision to conclude the operation, the surgeon decided to continue the surgery in 21% (12/56) of patients based on information obtained from the MRI. 25 In our study, MRI changed the operation when it identified fistula characteristics that could not be identified by physical examination, or when, after MRI, the fistula grade was assessed to be higher than that of the Parks classification. With these criteria, MRI changed the management in 24.6% of patients. We have also shown a significant contribution of MRI in detecting complex fistulas. This is mainly due to the increased incidence of blind tracts in Parks grade 3 and 4 or complex fistulas. The Association of Coloproctology of Great Britain and Ireland recommends preoperative MRI for recurrent and complex fistulae. 2 The parameters for complex fistulas are listed in Table 1. Especially for primary fistulas, predicting preoperatively whether a fistula is complex may be difficult with physical examination alone. 29 In our experience, if the external opening is farther away from the anal canal, the fistula tends to have a more complex course. In our study, the benefit of MRI was greater for fistulas in which the external opening was more than 2 cm from the anal canal. In some fistulas, the location of the external opening may be the only physical examination finding; thus, our finding may be important to justify a preoperative MRI for this group of patients.
We found 76.8% concordance between the St. James's Hospital grade and the Parks classification. This confirms that the two assessments are correlated but not equally informative. The correlation of MRI findings with operative findings has been investigated in other studies and ranged from 89% to 100%. 19,[26][27][28] Currently, recurrence of anal fistula is the only widely accepted indication for preoperative MRI evaluation. In our study, we observed that MRI significantly contributed to 40.05% of the cases.
The limitation of this study is that the data were evaluated retrospectively, representing our past experience with preoperative MRI for primary fistulas.
Although we can precisely identify the cases for which MRI provided additional information beyond the clinical examination and intraoperative findings, we could not determine prospectively for which patients the surgical management definitely changed.
CONCLUSION
In conclusion, our study is valuable in linking the findings of preoperative clinical examination and surgical exploration with preoperative MRI findings for the surgical management of anal fistulas. Therefore, we propose inclusion of MRI in the preoperative surgical assessment of anal fistulas when they are recurrent, complex, or high grade, or when the external opening is located more than 2 cm from the anal canal. | 2022-08-25T15:09:48.300Z | 2022-08-23T00:00:00.000 | {
"year": 2022,
"sha1": "247e5b685e5b626fe17fc4c7e1d0173d268812e1",
"oa_license": null,
"oa_url": "https://www.ijsurgery.com/index.php/isj/article/download/8992/5436",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "96e19428c481ca15d442d16c3ec87420012ebf10",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |